

Podcast #69: ChatGPT and Generative AI: Differences, Ecosystem, Challenges, Opportunities

Generative AI has been a hot topic, especially after the launch of ChatGPT by OpenAI. It has even exceeded the Metaverse in popularity. From top tech firms like Google, Microsoft and Adobe to chipmakers like Qualcomm, Intel and NVIDIA, everyone is integrating generative AI models into their products and services. So, why is generative AI attracting interest from all these companies?

While generative AI and ChatGPT are both used to generate content, what are the key differences between them? The content generated can include solutions to problems, essays, email or resume templates, or a short summary of a long report, to name a few. But the technology also poses certain challenges, such as training complexity, bias, deepfakes and intellectual property rights.

In the latest episode of ‘The Counterpoint Podcast’, host Maurice Klaehne is joined by Counterpoint Associate Director Mohit Agrawal and Senior Analyst Akshara Bassi to talk about generative AI. The discussion covers topics including the ecosystem, companies that are active in the generative AI space, challenges, infrastructure, and hardware. It also focuses on emerging opportunities and how the ecosystem could evolve going forward.

Click to listen to the podcast

Click here to read the podcast transcript.

Podcast Chapter Markers

01:37 – Akshara on what generative AI is.

03:26 – Mohit on the differences between ChatGPT and generative AI.

04:56 – Mohit talks about the issue of bias and the companies working on generative AI right now.

07:43 – Akshara on the generative AI ecosystem.

11:36 – Akshara on what Chinese companies are doing in the AI space.

13:41 – Mohit on the challenges associated with generative AI.

17:32 – Akshara on the AI infrastructure and hardware being used.

22:07 – Mohit on chipset players and what they are actively doing in the AI space.

24:31 – Akshara on how the ecosystem could evolve going forward.


AI Business Model on Shaky Ground

OpenAI, Midjourney and Microsoft have set the bar for chargeable generative AI services, with ChatGPT (GPT-4) and Midjourney costing $20 per month and Microsoft charging $30 per month for Copilot. The $20-per-month benchmark set by these early movers is also being used by generative AI start-ups to raise money at ludicrous valuations from investors hit by the current AI FOMO craze. But I suspect the reality is that it will end up being more like $20 a year.

To be fair, if one can charge $20 per month, have 6 million or more users, and run inference on NVIDIA’s latest hardware, then a lot of money can be made. If one then moves inference from the cloud to the end device, even more is possible as the cost of compute for inference is transferred to the user. Furthermore, this is a better solution for data security and privacy, as the user’s data, in the form of requests and prompt priming, will remain on the device and not be transferred to the public cloud. This is why it can be concluded that for services that run at scale and for the enterprise, almost all generative AI inference will be run on the user’s hardware, be it a smartphone, a PC or a private cloud.
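
The revenue logic above can be put into simple numbers. Here is a back-of-envelope sketch: the 6-million-user figure comes from the text, while the rest is illustrative arithmetic rather than Counterpoint data, ignoring churn, discounts and compute costs entirely:

```python
# Back-of-envelope sketch of the subscription business case.
# Assumptions: 6M users (from the text), no churn, no discounts, no costs.

def annual_revenue(users: int, price_per_period: float, periods_per_year: int) -> float:
    """Gross subscription revenue per year under the stated assumptions."""
    return users * price_per_period * periods_per_year

USERS = 6_000_000

today = annual_revenue(USERS, 20.0, 12)   # the current $20-per-month price point
eroded = annual_revenue(USERS, 20.0, 1)   # the $20-per-year price the author expects

print(f"At $20/month: ${today:,.0f} per year")   # $1,440,000,000 per year
print(f"At $20/year:  ${eroded:,.0f} per year")  # $120,000,000 per year
```

The gap between the two scenarios, roughly $1.4 billion versus $120 million of gross revenue on the same user base, is why price erosion rather than demand is the variable to watch.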

Consequently, assuming that there is no price erosion and endless demand, the business cases being touted to raise money certainly hold water. While the demand is likely to be very strong, I am more concerned with price erosion. This is because, outside of money to rent compute, there are not many barriers to entry, and Meta Platforms has already removed the only real obstacle to everyone piling in.

The starting point for a generative AI service is a foundation model, which is then tweaked and trained by humans to create the service desired. However, foundation models are difficult and expensive to design and cost a lot of money to train in terms of compute power. Up until March this year, there were no trained foundation models widely available, but that changed when Meta Platforms’ family of LlaMa models “leaked” online. Now it has become the gold standard for any hobbyist, tinkerer or start-up looking for a cheap way to get going.

Foundation models are difficult to switch out, which means that Meta Platforms now controls an AI standard in its own right, similar to the way OpenAI controls ChatGPT. However, the fact that it is freely available online has meant that any number of AI services for generating text or images are now freely available without any of the constraints or costs being applied to the larger models.

Furthermore, some of the other better-known start-ups such as Anthropic are making their best services available online for free. Claude 2 is arguably better than OpenAI’s paid ChatGPT service and so it is not impossible that many people notice and start to switch.

Another problem with generative AI services, beyond the foundation models themselves, is that there are almost no switching costs when moving from one service to another. The net result is that freely available models from the open-source community, combined with start-ups that need volume for their newly launched services, are going to start eroding the price of these services. This is likely to be followed by a race to the bottom, meaning that the real price ends up closer to $20 per year than $20 per month. It is at this point that the FOMO is likely to come unstuck, as start-ups and generative AI companies begin missing their targets, leading to down rounds, falling valuations, and so on.

There are plenty of real-world use cases for generative AI, meaning that it is not the fundamentals that are likely to crack but merely the hype and excitement that surround them. This is precisely what has happened to the Metaverse, where very little has changed in terms of developments or progress over the last 12 months, but now no one seems to care about it.

(This is a version of a blog that first appeared on Radio Free Mobile. All views expressed are Richard’s own.)


Artificial Intelligence: Irrational Exuberance is in Full Swing

As surely as autumn and winter follow summer, the current exuberance around AI is not going to last simply because the machines remain incapable of living up to the expectations that have been set for them.

These cycles typically take the form of a discovery of some description followed by a ramping of expectations which in turn leads to large amounts of money being invested for fear of missing out (FOMO).

The problem is that the expectations that are set are always unrealistic, meaning that when the time comes to deliver on those expectations, disappointment sets in. This is followed by collapsing valuations, bankruptcies and forced consolidation as investors are no longer willing to suspend disbelief.

This is the fourth AI Hype cycle with the others occurring in the 1960s, 1980s and 2017-2019, and this hype cycle looks exactly the same as the others except that it is much larger. Looking at investment activity and news flow, it is also very clear exactly where we are in the cycle.

First, expectations

  • The ability of Large Language Models (LLMs) to mimic human behavior has convinced some of the big names (like Professor Geoffrey Hinton) that artificial superintelligence is now materially closer than it was before.
  • While LLMs do have some very useful and lucrative use cases, they still have no causal understanding of the tasks they are performing.
  • This is why they hallucinate, make the most basic factual errors and are generally completely unreliable.
  • Therefore, the machines remain as stupid as ever. There is no evidence whatsoever that these machines are able to think.
  • But the problem is that they are so good at pretending to think that they are able to fool the great minds that created them.
  • Instead, all they do is calculate statistical relationships, meaning that the big promises that have been made will not be kept.

Second, investment

  • There are already many examples of money being thrown at start-ups with valuations and fundamentals being an afterthought:
  • OpenAI’s $30-billion valuation with a corporate culture that doesn’t want to make any profit.
  • Inflection AI raising $1.3 billion from Microsoft and NVIDIA at an estimated valuation of around $5 billion despite having only been around for a year and having no commercial product.
  • Mistral AI raising $113 million at a $260-million pre-money valuation despite being only a few weeks old with no revenues, no product and probably only the vaguest idea of what it is going to do.
  • This can be described as the very definition of a bubble where rationality gets lost in the mad rush toward the next big thing. A lot of shirts are going to be lost.

The latest innovations in LLMs have produced some remarkable abilities which, no doubt, will be put to both good and lucrative use. However, the technology upon which they are based has not changed, meaning that the limitations that prevented digital assistants and autonomous driving from being useful for anything more than the most basic tasks are also going to trip LLMs up.

Furthermore, this is no longer the exclusive realm of the big, well-financed companies that can pay tens of millions of dollars for massive compute capacity, as hobbyists and enthusiasts are now creating generative AI. Meta Platforms’ series of LLMs called LlaMa is now freely available to anyone who wants to tinker, and advances in training techniques have made it possible to fine-tune a 7bn-parameter model on a powerful laptop.
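
Why a 7bn-parameter model now fits on a laptop comes down to simple memory arithmetic. The sketch below uses assumed byte-widths and a hypothetical trainable-parameter fraction for LoRA-style fine-tuning; actual requirements vary by framework and technique:

```python
# Rough memory arithmetic for a 7bn-parameter model.
# Assumptions (illustrative, not benchmarks): fp16 weights at 2 bytes/param,
# 4-bit quantized weights at 0.5 bytes/param, and a LoRA-style setup training
# ~0.5% of parameters with Adam keeping ~8 bytes of fp32 state per trained param.

PARAMS = 7_000_000_000
GIB = 1024 ** 3

def weight_footprint_gib(params: int, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights (activations/optimizer excluded)."""
    return params * bytes_per_param / GIB

fp16 = weight_footprint_gib(PARAMS, 2)     # ~13 GiB: tight for a laptop GPU
int4 = weight_footprint_gib(PARAMS, 0.5)   # ~3.3 GiB: comfortably fits

print(f"fp16 weights: {fp16:.1f} GiB, 4-bit weights: {int4:.1f} GiB")

# Parameter-efficient fine-tuning trains only a small adapter, so optimizer
# state (the usual memory killer in full fine-tuning) shrinks accordingly.
trainable = 0.005 * PARAMS                  # assumed ~0.5% trainable fraction
adam_state_gib = trainable * 8 / GIB
print(f"Optimizer state for adapter params: {adam_state_gib:.2f} GiB")
```

Under these assumptions, quantized weights plus a small adapter come in well under the 16 GB of memory a high-end laptop typically offers, which is what makes the hobbyist fine-tuning boom plausible.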

This is why there are models popping up all over the place that are completely free to use. Some of them actually work quite well. Hence, the pricing of $20 per month for services like GPT-4, Perplexity AI and Midjourney may soon come under relentless pressure. This is really bad news for investors relying on spreadsheets for their return because no one seems to have modeled this scenario out.

The first sign of trouble will come when companies come back to the market after spending the money on fancy offices and expensive staff but nothing to show for the investments so far. This is when the down rounds begin, disillusionment sets in, reality makes its presence felt and winter begins.

One suspects this will begin sometime in the first half of 2024 and the fallout will not be pretty.

(This is a version of a blog that first appeared on Radio Free Mobile. All views expressed are Richard’s own.)



AI: Meta Platforms No Longer the Laggard That It Was

Meta’s history with AI is bad. Between 2016 and 2020, much of its hiring involved humans employed to keep objectionable content to a minimum because its automated processes were not nearly good enough to keep Facebook out of trouble. This had a direct impact on the company’s profitability and is partly why the company has recently been able to cut so much cost without any material impact on its operations.

We have heard and read a lot about the weakness of Meta Platforms when it comes to AI, but it has improved a lot over the last two years and now is in a position to offer a challenge to the leaders.

  • During 2020, signs started to emerge that at least in research, Meta Platforms was beginning to make a proper contribution towards the body of knowledge in AI.
  • This has continued, and although very little has shown up in its products to date, it has also demonstrated good progress in the development of large language models (LLMs) which underpin the latest chat services that everyone is so excited about.
  • Meta has an LLM called LlaMa, which exists in a range of sizes between 7bn and 65bn parameters, and these will underpin chatbots in Messenger and WhatsApp.
  • Versions of LlaMa will also be retrained to improve photo and video editing in Instagram and Reels as well as to use for internal corporate processes.
  • However, where Meta has made its real impact is in the open-source community where its LlaMa foundation models have become the standard upon which thousands of hobbyists and enthusiasts have been tinkering with generative AI.
  • The open-source community has also been quick to adopt new AI techniques that the big companies have not, which has given it the ability to do on laptops what OpenAI and Google still need data centres to achieve.
  • This has caused some consternation among the big companies to the point that OpenAI is considering releasing the full version of GPT-3 with its weights to compete with LlaMa.
  • It is not clear how LlaMa reached the open-source community as foundation models are very difficult to switch in and out and all of the work currently going on is based on LlaMa.
  • LlaMa becoming the platform for open-source development means that Meta now has access to a very large supply of innovations on top of its model that it can use or build on to create other services.
  • This combined with the increase in the quality of academic research coming out of Meta Platforms is what has led us to upgrade Meta Platforms from a laggard to the middle of the pack.
  • In terms of pushing back the boundaries of AI, the two leaders remain OpenAI and Google, but Meta Platforms is now right behind them alongside Baidu, ByteDance and SenseTime.
  • Part of the problem with assessing China is that the information flow around the development of cutting-edge technology in China has all but dried up due to the government’s moves to tighten national security.
  • Consequently, it is hard to say with a degree of certainty where the Chinese AI developments lie, but given how quickly open source has managed to catch up, it is difficult to think that the Chinese are not also hot on the heels of the leaders.
  • Therefore, Meta Platforms has greatly improved its position in AI. Its models are rapidly becoming a platform for development in the open-source community.

(This is a version of a blog that first appeared on Radio Free Mobile. All views expressed are Richard’s own.)


Artificial Intelligence: Regulation Should Target Humans, Not Machines

The recent Congressional hearings in the US greatly increase the prospect of some form of regulation of the AI industry, but this will have to be very carefully crafted to ensure that unintended consequences are minimised and that the US does not hamstring itself in the technological arms race with China.

  • OpenAI CEO Sam Altman, NYU professor Gary Marcus and Christina Montgomery from IBM jointly testified before Congress on Tuesday in a generally constructive and non-combative session.
  • The first topic under discussion was regulation and here, OpenAI and Microsoft are clearly open to working with the US government and even seem to welcome the prospect of some form of regulation.
  • What seems to be on the cards is a regulatory agency that issues licenses to operators over a certain level of scale. But even this is fraught with problems.
  • I continue to think that the real risk to humans is not from malevolent machines but from malevolent humans ordering the machines to do bad things. Therefore, some form of licensing may help.
  • At the same time, Altman and Marcus called out the risk of a technocracy where AI is concentrated in the hands of a few large players who would then have unimaginable power to control and shape society.
  • This is one of the biggest dangers that would result from regulation because a regulatory environment increases the cost of doing business and creates an (often large) bias towards the larger companies, as the smaller players cannot afford to comply.
  • This would see smaller players forced out of the market and consolidation towards a few larger players, which is exactly one of the things regulation seems to be seeking to avoid.
  • Limiting the development of AI is also a non-starter for two main reasons:
    • First, the genie is already out of the bottle. Large language models and the technologies and know-how of how to create them are already widely available in the open-source community.
    • So large is this community that there is speculation that the performance of open-source models may soon rival those of large companies.
    • Placing restrictions on development will only serve to drive development underground (bad scenario) or drive it overseas (even worse scenario).
    • Consequently, this technology is going to be developed regardless of the regulatory environment and so the scheme that embraces it is far more likely to succeed than the one that slows or holds it back.
    • Second, there is the technology rivalry in which the US (and increasingly the West) is locked in an ideological struggle with China.
    • This battle is currently being fought in the technology sector and semiconductors in particular, but it is now also starting to move into AI.
    • Unlike semiconductors, the US and the West have a much weaker ability to restrict China’s development in this space, as limiting access to the semiconductors used for AI training will only slightly slow China’s development.
    • Hence, if the US intentionally hobbles its own development, then this will hand an advantage to China, the one thing that all parties in the US government agree is a bad idea.
  • Hence, I suspect that the best regulatory environment will be a low-touch system that is cheap and simple to comply with and targets restricting access of bad actors rather than the technology itself.
  • Other areas discussed included the management of copyrights for content owners whose content is used for training and then becomes the genesis of a novel creation.
  • This is not a new issue as a similar problem exists with DJs who sample music or extracts of content to create new tracks and so I suspect that this will be solved over time.
  • Employment was also discussed. Both Altman and Marcus were of the opinion that the job market faces no immediate danger although there is likely to be some change. This is broadly in line with my view.
  • This is the first time I have seen an industry asking to be regulated which gives a much better chance of getting regulation that is productive rather than the unintended consequences that so regularly occur when rules are unilaterally imposed.
  • I continue to think that the machines are as dumb as ever, but their size and complexity have greatly enhanced their linguistic skills even if they are simply calculating the probability of words occurring next to each other.
  • This creates a convincing illusion of sentience which leads people to anthropomorphise these systems, which in turn is what makes them much more capable of being used by bad actors.
  • Hence, humans are in far more danger from other humans than they are from the machines, and it is this that any regulation should target.
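
The point above about machines "simply calculating the probability of words occurring next to each other" can be illustrated in its crudest form with a toy bigram model. Real LLMs use deep networks over long contexts and are vastly more capable, but the underlying objective, predicting the next token from observed statistics, is the same in spirit:

```python
# A toy bigram model: next-word prediction as pure co-occurrence counting.
# Real LLMs are far more sophisticated; this only illustrates the principle.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        follows[w][nxt] += 1
    return follows

def next_word_probs(follows: dict, word: str) -> dict:
    """Normalize the counts into a probability distribution over next words."""
    counts = follows[word.lower()]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

model = train_bigrams("the cat sat on the mat and the cat slept")
print(next_word_probs(model, "the"))  # 'cat' follows with p = 2/3, 'mat' with p = 1/3
```

A system like this has no notion of what a cat or a mat is; it only knows which words have historically followed which. That is the sense in which the text argues statistical fluency can create a convincing illusion of understanding.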

(This is a version of a blog that first appeared on Radio Free Mobile. All views expressed are Richard’s own.)

