Artificial Intelligence & ChatGPT – Some Early Thoughts

January 16, 2024

ChatGPT, developed by OpenAI (a leading artificial intelligence research organization), is a generative artificial intelligence (AI) system capable of ‘understanding’ and generating human-like text in response to user prompts. It became publicly available in late 2022 and quickly became the buzzword of 2023. New articles, papers, books, and podcasts about AI and its potential impact on the world appear every day. Both public and private companies have rushed to capitalize on the trend by publicly describing how they are integrating AI into their processes and procedures. But two questions are rarely asked: what exactly is artificial intelligence, and how does it work?

Let’s begin with definitions. Artificial intelligence is a broad field of computer science focused on creating ‘intelligent’ machines. These machines run software designed to perform tasks that typically require human intelligence. AI involves developing programs and models that enable machines to learn from data, recognize patterns, make decisions, and solve problems.

We can further break down artificial intelligence into three sub-categories:

  1. Rules-based AI: relies on explicitly programmed rules to make decisions or perform tasks. These systems follow a set of predefined instructions to process information and generate output.
  2. Machine learning: a subset of AI focused on developing software that allows computers to “learn.” Instead of being explicitly programmed, the system improves over time as it processes more data (the short sketch after this list contrasts this approach with a rules-based one).
  3. Generative AI (such as ChatGPT): systems that can generate new and original content by learning patterns from large datasets. When you ask this type of system a question, it infers what the answer should be.
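
To make the difference between the first two categories concrete, here is a minimal, purely illustrative Python sketch. The “pizza rating” task, the thresholds, the training data, and the use of scikit-learn’s LinearRegression are all assumptions chosen for illustration; they do not describe how any real AI product is built.

    # Purely illustrative: a toy contrast between a rules-based system and a
    # machine-learning system. All data and thresholds here are invented.
    from sklearn.linear_model import LinearRegression

    # 1. Rules-based AI: a person writes the decision logic out explicitly.
    def rules_based_rating(cheese_grams, bake_minutes):
        if 100 <= cheese_grams <= 150 and 10 <= bake_minutes <= 12:
            return "good pizza"
        return "needs work"

    # 2. Machine learning: the logic is fitted from examples rather than hand-written.
    #    Made-up training data: (cheese grams, bake minutes) -> customer rating out of 10.
    X = [[80, 9], [100, 10], [120, 11], [140, 12]]
    y = [6.0, 7.0, 8.0, 9.0]
    model = LinearRegression().fit(X, y)  # the "learning" step

    print(rules_based_rating(120, 11))                       # fixed rule -> "good pizza"
    print(round(float(model.predict([[130, 11.5]])[0]), 1))  # learned estimate, roughly 8.5

Generative AI goes a step further still: rather than predicting a single number, it produces entirely new content based on the patterns it has absorbed from its training data.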

Now, if you find this confusing, let’s imagine that artificial intelligence is like a team of cooks in the kitchen, each of whom is trying to make the perfect pizza:

  1. A rules-based AI is like a skilled chef who follows instructions perfectly. The chef has a rule book that specifies exactly how much of each ingredient to use and how long to bake the pizza, and, importantly, the chef will not deviate from those instructions. The chef consistently produces the same pizza based on pre-established rules.
  2. A machine learning AI is like a chef who learns to make a great pizza by tasting and adjusting the ingredients over time. There is no fixed recipe; the chef adapts based on experience. Maybe the first time, the chef uses a bit too much garlic and pepper in her tomato sauce, the customers complain (providing feedback), and the chef learns from the experience and moderates her use of garlic and pepper going forward.
  3. A generative AI, such as ChatGPT, is like a chef who has tasted millions and millions of slices of pizza (somehow without dying of a heart attack!). The chef has become a very creative pizza maker, able to invent entirely new recipes based on her experience tasting so many different pizzas and her understanding of how various flavors and textures interact.

In our artificial intelligence kitchen, the rules-based chef follows strict instructions, the machine learning chef learns and adapts from experience, and the generative chef creates innovative and novel recipes. Each chef represents a different form of AI and a different way of processing information.

So, why does AI matter? The hope is that artificial intelligence in its various forms will enhance company efficiency or potentially drive sales growth. Computers can complete many tasks much more quickly than people, so if complex problems can be solved by computers rather than people, companies can save time and resources. However, there are challenges to large-scale adoption.

It’s easy to say that artificial intelligence will make workers more efficient and aid in creating new products, but determining exactly how remains a challenge. Here are three examples in use or development today: (1) doctors are studying the use of AI to better diagnose brain tumors [1]; (2) GitHub (a company serving the needs of software developers) has developed an AI tool that generates code for programmers [2]; (3) Ubisoft (a video game developer) is reportedly studying ways to incorporate generative AI into its video games to create immersive and customized experiences for its customers [3].

While there may be benefits, the cost of developing an artificial intelligence system is high. It involves hiring consultants and computer scientists to develop the AI, then building or buying a computer capable of running it. Nvidia’s leading AI computer chips are selling for approximately $30,000 each, and it is likely you would need more than one. Finally, once the system is established, you need to run it, which creates ongoing costs. ChatGPT reportedly costs $700,000 a day in computing costs alone! [4] That does not include system maintenance and ongoing research.
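
To put those figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers cited above; the eight-chip cluster is a hypothetical assumption, and real deployments use far more hardware.

    # Back-of-the-envelope scale check using the figures cited in this article.
    # The chip count is a hypothetical assumption for illustration only.
    CHIP_PRICE = 30_000        # approximate price of one leading Nvidia AI chip
    DAILY_COMPUTE = 700_000    # ChatGPT's reported daily computing cost

    assumed_chips = 8          # hypothetical small cluster
    hardware_cost = assumed_chips * CHIP_PRICE
    annual_compute = DAILY_COMPUTE * 365

    print(f"Hardware for 8 chips:  ${hardware_cost:,}")   # $240,000
    print(f"Compute for one year:  ${annual_compute:,}")  # $255,500,000

Even before counting salaries, maintenance, and ongoing research, the annual compute bill alone runs into the hundreds of millions of dollars.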

On top of the clear financial burden, there are broader risks:

  • Machines can ‘hallucinate’ and provide incorrect answers – garbage in, garbage out! If bad or biased data is used to ‘train’ an AI system, the answers will be biased or wrong. Take the generative AI pizza chef as an example. She is inferring what pizza will taste best based on experience, but there is no guarantee that the pizza will taste good. If she were ‘trained’ only on pizzas with anchovies, chances are that if you ask her to make a pizza, you will be served a pizza with anchovies on it (the short sketch after this list plays out the same idea in code).
  • While we would like AI to be responsible and unbiased, there are no universally accepted standards defining what it means to be ‘responsible’ and ‘unbiased,’ let alone a mechanism to ensure compliance with any standards.
  • Leading AI organizations like Microsoft (among others) are first and foremost profit-motivated entities that may not fully consider the broader social and economic ramifications these systems could create.
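
As a small illustration of the ‘garbage in, garbage out’ point above, here is a toy Python sketch. The “model,” its training data, and the recommendation function are invented and deliberately oversimplified.

    # Toy illustration of "garbage in, garbage out": a model trained only on
    # anchovy pizzas can only ever recommend anchovies. Entirely invented example.
    from collections import Counter

    training_pizzas = ["anchovy"] * 100   # the chef's whole "experience" is anchovy pizzas

    def recommend_topping(history):
        # The "model" simply reproduces the most common pattern in its training data.
        return Counter(history).most_common(1)[0][0]

    print(recommend_topping(training_pizzas))   # -> "anchovy", regardless of what you wanted

A richer and more balanced set of training data is the only way for the recommendations to improve.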

In 1950, the mathematician and computer scientist Alan Turing asked the question “can machines think?” [5] At the time, the answer was no. Today, the possibilities seem endless, and the question has become much more difficult to answer with any degree of certainty. Open ChatGPT [6] and have a conversation with the machine; you can decide for yourself the extent to which it seems “human-like.” The implications of machines possessing human-like intelligence are enormous, but they come with risks.


[1] https://www.nytimes.com/2023/10/11/health/ai-tumor-diagnosis-brain-cancer.html

[2] https://github.com/features/copilot

[3] https://www.cnn.com/world/generative-ai-video-games-spc-intl-hnk   

[4] https://www.washingtonpost.com/technology/2023/06/05/chatgpt-hidden-cost-gpu-compute/

[5] If you are interested, you can read more about the “Turing Test” here: https://plato.stanford.edu/entries/turing-test/

[6] You can access ChatGPT from your web browser here: https://chat.openai.com/
