Artificial Intelligence: The Big Picture Is Transforming Rapidly
Artificial intelligence (AI) is just beginning to flex its capabilities with the latest round of generative AI tools, and yet already it has helped transform how we interact with and use information.
It also has vulnerabilities—such as deepfakes (fake or manipulated audio and images) and incomplete or incorrect data—that are stoking concerns, but which can be overcome by fact-checking and, in time, technological progress.
Those concerns gained prominence in October 2023, when President Biden issued an executive order requiring new safety assessments of the technology and measures to ensure the information it generates is accurate. The order builds on earlier voluntary commitments from leading AI companies.
Lisa Calkins and Jeff Schmitz, co-founders of HalfBlast Studios, presented a general assessment of generative AI and how some industries are using it today on Oct. 26 at The Dome at AMG, in a presentation titled Artificial Intelligence I: The Big Picture. A video recording of the presentation is included at the top of this article.
A follow-up event on Nov. 2, Artificial Intelligence II: Impact on Global Dynamics, Markets and Investing & a Long-term Look at the Future, expanded on the possibilities for AI.
TECHNOLOGY IS AT ANOTHER PIVOT POINT
AI, the use of computer science and data to enable machines to solve problems, is not new, but it has undergone a series of transformational pivots over many decades. Each pivot point represents another technological leap.
Researchers’ goal for AI is to have it take over specific tasks it can complete more quickly and efficiently than humans, such as ingesting and generating large amounts of information. AI also can recognize patterns and make predictions based on queries, as with Google’s search function or chatbots (the automated responders found on phone lines and websites).
AI’s use also goes far beyond search tools on the Internet, and it is intertwined with many aspects of our daily lives. Already, it is an integral part of autonomous vehicles, facial recognition, spam filters, data security, and even the robotic machines that clean your floors. Farmers use it to manage irrigation and analyze soil.
Today’s pivot point rests with generative AI and the large language models (LLMs) behind chatbots like ChatGPT and Bard. These chatbots respond to questions and can compose content, including letters, emails, and reports, using natural language, or humanlike dialogue.
“Think of ChatGPT as (you) having a smart assistant,” Calkins said. It is an incredibly powerful tool that is exceptionally good at searching through and summarizing large amounts of data.
She presented two examples of how someone could begin to play with ChatGPT. The first involved asking ChatGPT what a visitor could do in Calgary over three days in November. She got a full trip itinerary! To learn more about restaurants, you could then ask about the best places to eat in Calgary or narrow the query further to a particular style of food.
The second example involved asking ChatGPT for help writing a thank-you note to a child’s teacher. If you are unhappy with the output, you can ask for a shorter version, or for something written in the style of a favorite author. The output (ChatGPT’s response) serves as a sample of how to approach writing the email. Sometimes all we need is a little inspiration.
“It’s not doing my job for me, but helping me do my job,” Calkins said.
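For readers comfortable with a little code, the same kind of request can also be sent to an LLM programmatically. The sketch below uses OpenAI’s Python client; the model name and prompt wording are illustrative assumptions rather than anything shown in the presentation, and running it requires your own API key.
```python
# Minimal sketch of asking an LLM for a trip itinerary, similar to the ChatGPT
# example above. Assumes the `openai` package is installed and an
# OPENAI_API_KEY environment variable is set; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute one you have access to
    messages=[
        {"role": "user",
         "content": "Plan three days of activities for a visitor to Calgary in November."},
    ],
)

print(response.choices[0].message.content)

# To narrow the query, send a follow-up message in the same conversation, e.g.
# "Which evening would suit a good Italian restaurant for dinner?"
```
Just as in the chat window, refining the result is a matter of adding another message to the conversation rather than starting over.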
OUTPUT CAN BE RIFE WITH MISINFORMATION
There was a word of caution, too: AI is only a tool. While it can be extremely helpful in developing an outline or writing a piece of computer code, an LLM cannot tell you whether the data it is “scraping” from the Internet is accurate; the information may be outdated or a product of “garbage in, garbage out.” The model also may struggle to responsibly source each statement. Doing so accurately requires critical thinking by a human, as well as context.
We also should be aware of cyber criminals and “dark web” actors who knowingly spread fake news, Calkins said.
“Everything you read is not true,” she added.
HOW CAN YOU TELL IF AN ONLINE IMAGE IS FAKED?
In May 2023, a fake photo showing an explosion at the Pentagon briefly roiled government officials, news organizations, and financial markets after it spread across social media. Similar actions by so-called bad actors are likely to grow before the tools to combat them become more prevalent, Calkins said. But those tools will come, she added, such as metadata, watermarks, and other technical markers embedded in an image to help distinguish fake from real.
For now, there can be clues to whether an image of a human is AI-generated:
- Earlobes may look unusual
- Hair appears inconsistent around the head
- Jewelry may be mismatched
- Backgrounds may be distorted
- Peripheral items may appear malformed
Examples of how generative AI is being used beyond ChatGPT:
New models for patient diagnostics
AI’s ability to rapidly compare multiple MRI scans, break the images down into a series of dots (individual pixels), and quickly detect slight differences that may be indiscernible to the human eye has the potential to significantly enhance (or, some speculate, even transform) patient diagnostics. Another area of rapid progress is predictive analytics, where AI can compare patients diagnosed with pneumonia and identify which of them may be at greater risk because of additional factors such as asthma.
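To make the predictive-analytics idea concrete, the sketch below shows one common approach in simplified form: fitting a basic statistical model on patient features to estimate which pneumonia patients face elevated risk. The features, the synthetic numbers, and the use of scikit-learn are illustrative assumptions, not the method described in the presentation, and certainly not clinical guidance.
```python
# Illustrative sketch of predictive analytics for patient risk, using made-up
# data. Features (age, has_asthma, oxygen_saturation) and the model choice are
# assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: [age, has_asthma (0/1), oxygen_saturation %]
X = np.array([
    [34, 0, 97],
    [71, 1, 91],
    [58, 0, 94],
    [66, 1, 89],
    [45, 0, 96],
    [80, 1, 88],
])
# 1 = patient later needed escalated care, 0 = recovered with standard care
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimate risk for two new pneumonia patients
new_patients = np.array([[62, 1, 92], [38, 0, 98]])
for features, risk in zip(new_patients, model.predict_proba(new_patients)[:, 1]):
    print(f"Patient {features}: estimated high-risk probability {risk:.2f}")
```
Real diagnostic systems use far richer data and more sophisticated models, but the underlying idea is the same: learn from past patients to flag which new ones may need closer attention.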
Remote diagnostic assistance and monitoring, robotic surgery, and the development of new drugs are other areas where AI is helping medicine advance.
Potential to enhance education
Students have grabbed AI and run with it, Calkins said, but not always with the best results.
“Cheating is at a whole new level,” she said. “Any kid can take a picture of their math homework, and every problem is solved (via AI). It is literally that easy.”
Plagiarism is also on the rise, with students using AI to generate a term paper or project. But AI can just as easily help teachers verify students’ work through detection tools available online, such as the free GPTZero, while OpenAI, the maker of ChatGPT, is working on a separate solution specifically for educators.
The immediate response seems to be a desire to ban AI. But the better answer may be to define the positive elements of AI and how they can be used to enhance teaching.
For instance, AI can quickly locate and highlight key words or topics required in a report or essay, saving time for the teacher to focus instead on how the student understood the assignment and laid out their thoughts.
Calkins also mentioned “intelligent tutoring,” in which a student uses an online tool for study help. Based on the student’s responses, the tool not only tracks progress but can also customize the lesson to topics where the student is struggling, often without human intervention. It also alerts the teacher to the student’s progress.
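As a rough illustration of the adaptive logic behind intelligent tutoring, the short sketch below tracks a student’s accuracy by topic and steers the next lesson toward the weakest area. The topics, scoring, and selection rule are hypothetical assumptions, not details from the presentation.
```python
# Hypothetical sketch of an intelligent-tutoring loop: record each graded
# answer by topic, then pick the next lesson from the lowest-accuracy topic.
from collections import defaultdict

results = defaultdict(lambda: {"correct": 0, "attempted": 0})

def record_answer(topic: str, correct: bool) -> None:
    """Track one graded answer for a topic."""
    results[topic]["attempted"] += 1
    if correct:
        results[topic]["correct"] += 1

def next_lesson_topic() -> str:
    """Return the topic where the student's accuracy is lowest."""
    return min(results, key=lambda t: results[t]["correct"] / results[t]["attempted"])

# Simulated study session
record_answer("fractions", True)
record_answer("fractions", False)
record_answer("decimals", True)
record_answer("word problems", False)

print("Suggest extra practice on:", next_lesson_topic())
# A real tutoring tool would also report this progress back to the teacher.
```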
AI solutions, when used to assist in the classroom, have the “potential to raise the level of education in the world.”
HOW AMG CAN HELP
Not a client? To find out more about AMG’s Personal Financial Management (PFM) or to book a free consultation, call 303-486-1475 or email us the best day and time to reach you.
This information is for general information use only. It is not tailored to any specific situation, is not intended to be investment, tax, financial, legal, or other advice and should not be relied on as such. AMG’s opinions are subject to change without notice, and this report may not be updated to reflect changes in opinion. Forecasts, estimates, and certain other information contained herein are based on proprietary research and should not be considered investment advice or a recommendation to buy, sell or hold any particular security, strategy, or investment product.