Notes from Davos: 10 things you should know about AI

The following is a guest post from John deVadoss.

Davos in January 2024 was about one theme – AI.

Vendors were hawking AI; sovereign states were touting their AI infrastructure; intergovernmental organizations were deliberating over AI’s regulatory implications; corporate chieftains were hyping AI’s promise; political titans were debating AI’s national security connotations; and almost everyone you met on the main Promenade was waxing eloquent on AI.

And yet, there was an undercurrent of hesitancy: Was this the real deal? Here then are 10 things that you should know about AI – the good, the bad and the ugly – collated from a few of my presentations last month in Davos.

  1. The precise term is “generative” AI. Why “generative”? Previous waves of AI innovation learned patterns from datasets and recognized those patterns when classifying new input data; this wave learns large models (in effect, collections of patterns) and uses those models to creatively generate text, video, audio and other content.
  2. No, generative AI is not hallucinating. When previously trained large models are asked to create content, they do not always contain fully formed patterns to guide the generation; where the learned patterns are only partial, the models have no choice but to ‘fill in the blanks’, producing what we observe as so-called hallucinations.
  3. As some of you may have observed, the generated outputs are not necessarily repeatable. Why? Because generating new content from partially learned patterns involves randomness – it is a stochastic process, which is a fancy way of saying that generative AI outputs are not deterministic.
  4. Non-deterministic generation of content in fact sets the stage for the core value proposition in the application of generative AI. The sweet spot for usage lies in use cases where creativity is involved; if there is no need or requirement for creativity, then the scenario is most likely not an appropriate one for generative AI. Use this as a litmus test.
  5. Creativity in the small provides for very high levels of precision; using generative AI in software development to emit code that is then used by a developer is a great example. Creativity in the large forces the models to fill in very large blanks; this is why, for instance, you tend to see false citations when you ask a model to write a research paper.
  6. In general, the metaphor for generative AI in the large is the Oracle at Delphi. Oracular statements were ambiguous; likewise, generative AI outputs may not necessarily be verifiable. Ask questions of generative AI; don’t delegate transactional actions to generative AI. In fact, this metaphor extends well beyond generative AI to all of AI.
  7. Paradoxically, generative AI models can play a very significant role in science and engineering, even though these domains are not typically associated with artistic creativity. The key is to pair a generative AI model with one or more external validators that serve to filter the model’s outputs, and to feed these verified outputs back as new prompt input for subsequent cycles of creativity, until the combined system produces the desired result.
  8. The broad usage of generative AI in the workplace will lead to a modern-day Great Divide: between those who use generative AI to exponentially improve their creativity and their output, and those who abdicate their thought process to it, gradually becoming sidelined and, inevitably, furloughed.
  9. The so-called public models are mostly tainted. Any model that has been trained on the public internet has, by extension, been trained on the content at the extremities of the web, including the dark web and more. This has two grave implications: the models have likely been trained on illegal content, and they have likely been infiltrated by trojan-horse content.
  10. The notion of guard-rails for generative AI is fatally flawed. As stated in the previous point, when the models are tainted, there are almost always ways to creatively prompt the models to bypass the so-called guard-rails. We need a better approach; a safer approach; one that leads to public trust in generative AI.
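Point 3 can be made concrete with a toy next-token sampler. This is a sketch, not any particular model’s API: the vocabulary, the scores, and the `temperature` parameter are illustrative assumptions. It shows why two runs on the same input need not agree – each token is drawn at random in proportion to its score.

```python
import math
import random

def sample_next_token(scores, temperature=1.0, rng=random):
    """Draw one token with probability proportional to exp(score / temperature).

    Low temperature sharpens the distribution (more repeatable output);
    high temperature flattens it (more varied output).
    """
    tokens = list(scores)
    logits = [scores[t] / temperature for t in tokens]
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]  # numerically stable softmax
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token scores for some prompt.
scores = {"cat": 2.0, "dog": 1.5, "ferret": 0.1}

# Two runs over the same scores can produce different continuations.
print([sample_next_token(scores) for _ in range(5)])
print([sample_next_token(scores) for _ in range(5)])
```

Real models repeat a draw like this at every step of generation, so small per-step randomness compounds into visibly different outputs.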
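The pairing described in point 7 – a generator whose proposals are filtered by an external validator, with verified output fed back as the next prompt – can be sketched as follows. Both `generate` and `is_valid` are hypothetical stand-ins: in practice the generator would be a generative model and the validator a compiler, test suite, theorem checker, or simulator.

```python
import random

def generate(prompt, rng):
    # Stand-in for the "creative" model step: propose one new element.
    return prompt + [rng.randint(0, 9)]

def is_valid(candidate):
    # Stand-in external validator: accept only even digits, say.
    return candidate[-1] % 2 == 0

def run_cycles(seed_prompt, rng, target_len=5, max_tries=1000):
    """Generate, validate, and feed verified output back in as the new
    prompt, until the combined system produces the desired result."""
    prompt = list(seed_prompt)
    for _ in range(max_tries):
        if len(prompt) >= target_len:
            return prompt
        candidate = generate(prompt, rng)
        if is_valid(candidate):   # the validator filters the output...
            prompt = candidate    # ...and it becomes the next prompt
    raise RuntimeError("validator never accepted enough proposals")

print(run_cycles([1], random.Random(7)))
```

The filter-then-refeed loop is what lets a non-deterministic generator drive toward a verified result: invalid proposals are simply discarded rather than trusted.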

As we witness the use and the misuse of generative AI, it is imperative to look inward, and remind ourselves that AI is a tool, no more, no less, and, looking ahead, to ensure that we appropriately shape our tools, lest our tools shape us.
