New Delhi: Generative AI became a buzzword this year, capturing the public’s imagination and sparking a rush between Microsoft and Alphabet to launch products with technology they believe will change the nature of work.
Here’s everything you need to know about this technology.
What is generative artificial intelligence?
Like other forms of AI, generative AI learns how to take action from past data. It generates entirely new content—text, an image, or even computer code—based on this training, rather than simply classifying or selecting data like other AIs.
The most popular generative AI application is ChatGPT, a chatbot that Microsoft-backed OpenAI released late last year. The AI model powering it is known as a large language model because it takes in a text prompt and writes a human-like response from it.
GPT-4, the latest model OpenAI announced this week, is “multimodal” because it can ingest not only text but images as well. On Tuesday, the head of OpenAI demonstrated how it could take a photo of a hand-drawn mock-up of a website he wanted to build and, from that, create a real site.
What’s the use of it?
Demonstrations aside, companies are already putting generative AI to work.
The technology is useful for creating a first draft of marketing copy, for instance, though it may require cleanup because it is not perfect. One example is from CarMax Inc (KMX.N), which has used a version of OpenAI’s technology to summarize thousands of customer reviews and help shoppers decide which used car to buy.
Generative AI can also take notes during a virtual meeting. It can draft and customize emails, and it can create slide presentations. Microsoft Corp. and Alphabet Inc.’s Google both showed off these features in product announcements this week.
What’s wrong with that?
Nothing, although there are concerns about potential misuse of the technology.
School systems have become concerned that students will hand in essays drafted by AI, undermining the hard work required for them to learn. Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments, to produce far more disinformation than before.
At the same time, the technology itself is prone to making mistakes. Plausible-sounding errors that the AI confidently presents, known as “hallucinations,” and seemingly erratic responses such as professing love to a user are among the reasons companies have aimed to test the technology before making it widely available.
Is this only about Google and Microsoft?
These two companies are at the forefront of research and investment in large language models, and they are the largest companies putting AI into widely used software such as Gmail and Microsoft Word. But they are not alone.
Large companies like Salesforce Inc (CRM.N) as well as smaller ones like Adept AI Labs are either building competing AI technology of their own or packaging technology from others to give users new powers through software.
How is Elon Musk involved?
He co-founded OpenAI with Sam Altman. But the billionaire left the startup’s board in 2018 to avoid a conflict of interest between OpenAI’s work and the artificial intelligence research conducted by Tesla Inc (TSLA.O) – the electric car maker he leads.
Musk has expressed concerns about the future of artificial intelligence and has called for a regulatory authority to ensure that development of the technology serves the public interest.
“It’s a very dangerous technology. I’m afraid I’ve done some things to speed it up,” he said near the end of a Tesla Inc (TSLA.O) Investor Day event earlier this month.
“Tesla is doing good things in AI, I don’t know, it’s stressing me out, and I’m not sure what more to say about that.”