We all know by now that text-to-image generative AI tools such as DALL-E 2 and Stable Diffusion “create works of their own by drawing on patterns they’ve identified in vast troves of existing, human-created content”. Late last year, text-based ChatGPT (short for Chat Generative Pre-trained Transformer), whose job is “optimising language models for dialogue”, arrived like a bat out of hell, attracting more than a million users within five days of its release.
ChatGPT was new, fresh and exciting, and it has been so well received that Microsoft, a big investor in its maker, OpenAI, has got involved in its wild public beta testing programme, AKA “reinforcement learning from human feedback”.
Now, other big tech firms are scrambling to jump on the bandwagon and approve the use of AI tools in order to catch up, even if it means turning a blind eye to potential safety concerns around the sharing of false information, deepfakes, plagiarism and cheating in exams.
Engineers trained at Google are leaving to join or found innovative new AI companies, and the tech giant is under threat. Is there a risk that the world’s preeminent search engine is about to be usurped by a tool that is more natural and fun to use, even if it is prone to mistakes?
For some useful information on the basics of ChatGPT, take a look at this article on Digital Trends by Fionna Agomuoh. If you would like practical advice, guidance and support on how to safely start using generative AI tools, please get in touch – firstname.lastname@example.org.
Source: The Washington Post