The outlook for AI in 2024 looks very positive to me. There are areas of concern, but overall I believe there will be a lot of positive developments. Generative AI is already helping organizations differentiate themselves, and companies are working to integrate it seamlessly into daily operations. An example I encountered this week is at eBay. I put my wife’s old iPhone up for sale. After I filled out all the details (model number, memory, color, features, etc.), the next step in the workflow was to write a description. Below the text box was a button that said, “Let AI write your description.” I clicked, and immediately a description appeared in the text box. It was clear, concise, and well worded. This is a win for the buyer, the seller, and eBay.
The way I see it, 2023 was a year of AI news and onlookers; 2024 will see a shift from AI onlookers to AI implementers. I believe businesses will embrace AI, build it into their long-term strategies, and focus on applying it to the daily operations that can benefit the most. Organizations that do not achieve an effective focus on AI will be left behind. The good news is that companies do not have to reinvent the wheel: IT and IT-solutions providers are building AI into their products. Microsoft announced this week that it is adding a dedicated Copilot key to new Windows PC keyboards. Pressing the key will engage Microsoft’s AI assistant, Copilot.
The scramble is underway. I asked Google’s Bard for a list of the top ten AI companies. It appropriately said, “Defining the top ten AI companies can be subjective and depend on various factors like market valuation, research impact, public recognition, and specific areas of focus within AI.” Very true. However, here are ten notable companies making significant contributions in different aspects of AI, in no particular order:
This is hardly an exhaustive list, and other companies like Anthropic, Palantir, SenseTime, and Perplexity are raising billions of dollars of venture capital. Google committed $500 million up front, with another $1.5 billion over time, to Anthropic. Jeff Bezos has invested in Perplexity. It is much too soon to say who the winners will be. The big players are the most visible targets for regulation and governance. Europe and China have advanced significant regulation aimed at the safety of AI. Our Divided House and the Assisted Living Senate are asleep at the switch. Hopefully, they will get to work this year on implementing reasonable regulations.
Meanwhile, there is anticipated growth in AI lawsuits, focused on intellectual property, ethical concerns, and transparency. The big news in the legal space is that The New York Times has filed a lawsuit against Microsoft and OpenAI, alleging the two companies misused the Times’ copyrighted journalistic content to train their artificial intelligence chatbots, Microsoft Copilot and OpenAI’s ChatGPT. The Times is likely seeking billions of dollars in damages and a permanent injunction to prevent further use of its material.
I expect this suit will play out over years, not months, and it will likely end up at the Supreme Court. By the way, Chief Justice Roberts said a lot about AI in his annual report. I read his comments as mostly positive. He acknowledged the potential of AI to improve access to justice, automate legal research, and expedite case resolution. He recognized its potential to increase efficiency within the court system, possibly freeing up judges for more crucial tasks. He also mentioned the potential benefits of AI in various fields, like healthcare, transportation, and scientific research. His concerns were mostly about privacy violations and algorithmic bias in AI applications, and he urged caution and responsible development. Roberts also addressed the concern that AI could fuel the spread of misinformation, and the need for careful verification and fact-checking. He expressed reservations about AI replacing human judges entirely, emphasizing the importance of human discretion and ethical considerations in legal decision-making. I was impressed with the depth of understanding the Chief Justice displayed.
Back to the big lawsuit. The key points are:
Copyright infringement: The Times claims Microsoft and OpenAI unlawfully accessed and copied vast amounts of its articles and other content without permission or compensation. This allegedly includes millions of words across various topics.
Unfair competition: The Times argues that using its content to train AI chatbots creates competing products that can potentially erode their readership and advertising revenue.
Threat to journalism: The lawsuit raises concerns about the broader implications for the news industry, suggesting that tech giants might exploit news organizations without giving them due credit or financial compensation.
Both defendants have yet to file formal responses in court. However, they have previously defended their use of publicly available data for AI training, arguing that it falls under the fair use exception in copyright law. At a minimum, it seems to me, the lawsuit will set a precedent for how copyrighted material can be used in AI training, shaping copyright law and the relationship between news organizations and tech companies. The legal battle is likely to be complex and lengthy, with implications for the future of AI development and the protection of journalistic content.
P.S. The core issue of the New York Times suit is that Large Language Models (LLMs) are trained on a sweep of everything on the Internet, which obviously includes a lot of copyrighted material (in addition to misinformation). In 2024, I expect to see smaller, more specialized LLMs. An example would be an LLM trained on anonymized information from electronic health records. I believe such “mini LLMs” will surprise us with huge benefits.
Note: I use Bard and other AI chatbots as my research assistants. AI can boost productivity for anyone who creates content. Sometimes I get incorrect data from AI, and when something looks suspicious, I dig deeper. Sometimes the data varies depending on the source where the AI found it. I take responsibility for my posts, and if anyone spots an error, I will appreciate knowing it and will correct it.