
Podcast #69: ChatGPT and Generative AI: Differences, Ecosystem, Challenges, Opportunities

Generative AI has been a hot topic, especially after the launch of ChatGPT by OpenAI. It has even exceeded the Metaverse in popularity. From top tech firms like Google, Microsoft and Adobe to chipmakers like Qualcomm, Intel and NVIDIA, all are integrating generative AI models into their products and services. So, why is generative AI attracting interest from all these companies?

While generative AI and ChatGPT are both used for generating content, what are the key differences between them? The content generated can include solutions to problems, essays, email or resume templates, or a short summary of a big report to name a few. But it also poses certain challenges like training complexity, bias, deep fakes, intellectual property rights, and so on.

In the latest episode of ‘The Counterpoint Podcast’, host Maurice Klaehne is joined by Counterpoint Associate Director Mohit Agrawal and Senior Analyst Akshara Bassi to talk about generative AI. The discussion covers topics including the ecosystem, companies that are active in the generative AI space, challenges, infrastructure, and hardware. It also focuses on emerging opportunities and how the ecosystem could evolve going forward.

Click to listen to the podcast

Click here to read the podcast transcript

Podcast Chapter Markers

01:37 – Akshara on what generative AI is.

03:26 – Mohit on differences between ChatGPT and generative AI.

04:56 – Mohit talks about the issue of bias and companies working on generative AI right now.

07:43 – Akshara on the generative AI ecosystem.

11:36 – Akshara on what Chinese companies are doing in the AI space.

13:41 – Mohit on the challenges associated with generative AI.

17:32 – Akshara on the AI infrastructure and hardware being used.

22:07 – Mohit on chipset players and what they are actively doing in the AI space.

24:31 – Akshara on how the ecosystem could evolve going forward.

Also available for listening/download on:

Apple Podcasts | Spotify | TuneIn Radio | Google Podcasts

AI Business Model on Shaky Ground

OpenAI, Midjourney and Microsoft have set the bar for chargeable generative AI services, with ChatGPT (GPT-4) and Midjourney costing $20 per month and Microsoft charging $30 per month for Copilot. The $20-per-month benchmark set by these early movers is also being used by generative AI start-ups to raise money at ludicrous valuations from investors hit by the current AI FOMO craze. But I suspect the reality is that it will end up being more like $20 a year.

To be fair, if one can charge $20 per month, attract 6 million or more users, and run inference on NVIDIA’s latest hardware, then a lot of money can be made. If one then moves inference from the cloud to the end device, even more is possible as the cost of compute for inference will be transferred to the user. Furthermore, this is a better solution for data security and privacy, as the user’s data in the form of requests and prompt priming will remain on the device and not be transferred to the public cloud. This is why it can be concluded that for services that run at scale and for the enterprise, almost all generative AI inference will be run on the user’s hardware, be it a smartphone, a PC or a private cloud.
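The arithmetic above can be sketched quickly. All figures below are illustrative assumptions for the scenario described (the per-user cloud-inference cost in particular is hypothetical), not estimates from this article:

```python
# Back-of-envelope subscription economics for a generative AI service.
# All inputs are illustrative assumptions, not measured figures.
subscribers = 6_000_000          # the "6 million or more users" scenario
price_per_month = 20.0           # the $20/month benchmark

annual_revenue = subscribers * price_per_month * 12

# Hypothetical cloud-inference cost per user per month; moving
# inference to the end device pushes this toward zero for the provider.
cloud_cost_per_user_month = 4.0
annual_cloud_cost = subscribers * cloud_cost_per_user_month * 12

gross_cloud = annual_revenue - annual_cloud_cost   # inference in the cloud
gross_on_device = annual_revenue                   # user pays for compute

# If price erosion drives the effective price to $20/year:
eroded_annual_revenue = subscribers * 20.0

print(f"cloud gross:   ${gross_cloud:,.0f}")
print(f"on-device:     ${gross_on_device:,.0f}")
print(f"after erosion: ${eroded_annual_revenue:,.0f} revenue")
```

The gap between the last two numbers is the core of the argument: at $20 per year rather than $20 per month, revenue falls by an order of magnitude while the valuations raised against the monthly figure do not.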

Therefore, assuming no price erosion and endless demand, the business cases being touted to raise money certainly hold water. While demand is likely to be very strong, I am more concerned about price erosion. This is because, outside of the money needed to rent compute, there are not many barriers to entry, and Meta Platforms has already removed the only real obstacle to everyone piling in.

The starting point for a generative AI service is a foundation model, which is then tweaked and trained by humans to create the service desired. However, foundation models are difficult and expensive to design and cost a lot of money to train in terms of compute power. Up until March this year, there were no trained foundation models widely available, but that changed when Meta Platforms’ family of LLaMA models “leaked” online. Now it has become the gold standard for any hobbyist, tinkerer or start-up looking for a cheap way to get going.

Foundation models are difficult to switch out, which means that Meta Platforms now controls an AI standard in its own right, similar to the way OpenAI controls ChatGPT. However, the fact that it is freely available online has meant that any number of AI services for generating text or images are now freely available without any of the constraints or costs being applied to the larger models.

Furthermore, some of the other better-known start-ups such as Anthropic are making their best services available online for free. Claude 2 is arguably better than OpenAI’s paid ChatGPT service, so it is not impossible that many people notice and start to switch.

Another problem with generative AI services is that outside of foundation models, there are almost no switching costs to move from one service to another. The net result of this is that freely available models from the open-source community, combined with start-ups that need to get volume for their newly launched services, are going to start eroding the price of the services. This is likely to be followed by a race to the bottom, meaning that the real price ends up being more like $20 per year rather than $20 per month. It is at this point that the FOMO is likely to come unstuck, as start-ups and generative AI companies will start missing their targets, leading to down rounds, falling valuations, and so on.

There are plenty of real-world use cases for generative AI, meaning that it is not the fundamentals that are likely to crack but merely the hype and excitement that surrounds them. This is precisely what has happened to the Metaverse, where very little has changed in terms of developments or progress over the last 12 months, but now no one seems to care about it.

(This is a version of a blog that first appeared on Radio Free Mobile. All views expressed are Richard’s own.)

Related Posts

MediaTek to Focus on Automotive, Edge AI for Growth

  • The company saw a slight growth in Q2 revenues due to the improving demand for 5G SoCs.
  • Inventory came down to a relatively normal level.
  • MediaTek and NVIDIA have tied up to develop a full-scale product roadmap for the automotive industry.
  • Significant revenues are expected to be seen for MediaTek’s auto and custom ASIC segments from 2026.


MediaTek’s revenues were slightly up sequentially but down 43% annually in Q2 2023. Inventory has gradually come down to a relatively normal level, but the demand for smartphones will remain slow due to the global macroeconomic situation and competition from the refurbished smartphone market. Against this backdrop, MediaTek is diversifying its portfolio by focusing on the auto, smart edge and custom ASIC segments. The company is estimated to take over two years to get material revenues from these segments.

AI and ASIC Opportunity

CEO:“As for ASIC, we recently see growing enterprise ASIC business opportunities in AI and datacenter markets. With our strong IP and SoC integration capabilities, we aim to continue to grow this business in the future.”

Parv Sharma’s analyst take: “With the growth in generative AI, the demand for edge AI processing has accelerated. Being one of the top players in edge devices, MediaTek is well-positioned to benefit from this shift. The company will focus on winning enterprise ASIC projects but catching up with major players like Broadcom and Marvell will take time, as customers typically work with existing suppliers for repeat projects.”

Growing focus on auto and partnership with NVIDIA

CEO:“We’re very excited about the recently announced partnership between MediaTek and NVIDIA to develop a full-scale product roadmap for the automotive industry. We believe our industry-leading low-power processors and 5G, WiFi connectivity solutions, combined with NVIDIA’s strong capability in software and AI cloud, will help us become highly competitive in the future connected software-defined vehicles market and shorten our time to market to accelerate our growth.”

Shivani Parashar’s analyst take: “MediaTek launched Dimensity Auto to focus on cockpit and connectivity solutions. With its partnership with NVIDIA, the company aims to develop a full-scale product roadmap for the automotive industry. Auto design cycles are long so it will take some time (2026-2027) for the company to increase revenues from this segment. Overall, we can say the auto segment will become a long-term revenue growth driver for MediaTek.”

Customer and channel inventories come down

CEO: “We observed that customer and channel inventories across major applications have gradually reduced to a relatively normal level. Recent demand from our customers has shown a certain level of stabilization. However, our customers are still managing their inventory cautiously as global consumer electronics end market demand remains soft. For the near term, we expect our business to gradually improve in the second half of the year.”

Shivani Parashar’s analyst take: “According to our supply chain checks, inventory levels are coming down and will get back to normal in the second half of 2023. OEMs will start restocking but will be cautious due to weak consumer demand and global macroeconomic conditions.”

Result summary

  • Slight improvement in revenues: MediaTek recorded $3.2 billion in revenues in Q2 2023, a slight increase of 2% QoQ but a decrease of 43% YoY due to the weak global demand for end products and the second-hand smartphone market. Customer and channel inventories across major applications have come down to a relatively normal level.
  • Maintained mobile segment revenue due to 5G SoCs: The mobile phone segment contributed 46% to the company’s revenue in Q2 2023, which declined by 51% YoY and increased by 2% QoQ. The demand for 5G SoCs improved during the quarter. The new flagship Dimensity SoC will be launched in the coming month.
  • New opportunities for smart edge: The smart edge segment contributed 47% to the company’s revenue in Q2, growing 2% sequentially. The demand for connectivity remained stable in the quarter. Business opportunities are growing for the ASIC segment.
  • Price discipline: MediaTek will focus on maintaining gross margin, following price discipline at a time of uncertainty in the global semiconductor industry.
  • Favorable guidance: MediaTek guided Q3 revenues in the range of $3.3 billion to $3.5 billion, growing 4%-11% sequentially. Gross margin is expected to be around 47%, while the operating expense ratio is expected to be around 32% in Q3 2023. The smartphone, connectivity and PMIC segments will see revenue growth. The smart TV segment will witness declining revenues in the third quarter due to excess inventory.
  • Auto segment is picking up: Automotive will contribute $200 million to $300 million to MediaTek’s revenue in 2023. More significant revenue can be seen from 2026. The current auto design pipeline revenue for MediaTek is over $1 billion.

Related posts

TSMC Bullish on AI in Long Term

A weaker-than-expected macroeconomic situation continued to weigh on TSMC’s Q2 2023 business performance. Muted smartphone and PC/notebook demand negatively impacted the overall utilization rate during the quarter. Though largely expected by the market, the company further cut its full-year revenue guidance on the weaker end demand expected for H2 2023. However, TSMC projects strong AI demand in Q3 2023 and, going forward, sees itself as the key enabler for AI GPUs and ASICs that require a large die size. We give our takes on the key points discussed during the earnings call:

Is AI semiconductor demand real?

  • Chairman (Mark Liu): Neither can we predict the near future, meaning next year, how the sudden demand will continue or will flatten out. However, our model is based on the data center structure. We assume a certain percentage of the data center processors are AI processors and based on that, we calculate the AI processor demand. And this model is yet to be fitted to the practical data later on. But in general, I think the trend of a big portion of data center processors will be AI processors is a sure thing. And will it cannibalize the data center processors? In the short term, when the capex of the cloud service providers is fixed, yes, it will. It is. But as for the long term, when their data service – when the cloud services have the generative AI service revenue, I think they will increase the capex. That should be consistent with the long-term AI processor demand. And I mean the capex will increase because of the generative AI services.
  • Adam Chang’s analyst take: Supply chain checks reveal that cloud service providers such as Microsoft, Google, and Amazon are aggressively investing in AI servers. NVIDIA is continuing to add orders for the A100 and H100 to the supply chain, echoing the strong momentum for AI demand. TSMC holds a significant market share in AI semiconductor wafer production, mitigating the risk of misjudging CoWoS capacity expansion concerning AI demand.
  • Akshara Bassi’s analyst take: Over the medium term, as hyperscalers continue to develop their own proprietary AI models and look to monetize through AI-as-a-Service and similar models, the infrastructure demand should remain robust.

Can AI semiconductor demand offset short-term macro weakness?

  • CEO (Che-Chia Wei): Three months ago, we were probably more optimistic, but now it’s not. Also, for example, China economy’s recovery is actually also weaker than we thought. And so, the end market demand actually did not grow as we expected. So put all together, even if we have a very good AI processor demand, it’s still not enough to offset all those kinds of macro impacts. So, now we expect the whole year will be -10% YoY.
  • Adam Chang’s analyst take: Although there is a lot of promise around AI, it would only account for around 6% of total revenues in 2023. Therefore, AI is not a panacea for broader short-term demand weakness.

Is TSMC CoWoS capacity enough to fulfill current AI demand?

  • CEO (Che-Chia Wei):For AI, right now, we see very strong demand, yes. For the front-end part, we don’t have any problem to support, but for the back end, the advanced packaging side, especially for the CoWoS, we do have some very tight capacity to — very hard to fulfill 100% of what customers needed. So, we are working with customers for the short term to help them to fulfill the demand, but we are increasing our capacity as quickly as possible. And we expect these tightening will be released next year, probably toward the end of next year. Roughly probably 2x of the capacity will be added.
  • Adam Chang’s analyst take: Due to TSMC’s CoWoS capacity constraints, the company is finding it challenging to fulfill the strong AI demand from customers, including NVIDIA, Broadcom, and Xilinx, at the moment. NVIDIA is actively seeking second-source suppliers as TSMC looks to outsource some of its production.

N3E/N3/N2 status

  • CEO (Che-Chia Wei): N3 is already in production with good yield. We are seeing robust demand for N3 and we expect a strong ramp in the second half of this year, supported by both HPC and smartphone applications. N3 is expected to continue to contribute a mid-single-digit percentage of our total wafer revenue in 2023. Our N2 technology development is progressing well and is on track for volume production in 2025. Our N2 will adopt a nanosheet transistor structure to provide our customers with the best performance, cost, and technology maturity.
  • Adam Chang’s analyst take: Apple is the sole customer expected to adopt TSMC’s 3nm technology in its A17 Bionic and M3 chips during 2023. The Qualcomm Snapdragon 8 Gen 4 processor is also anticipated to join the TSMC 3nm family (N3E) in 2024. Moreover, Intel is likely to adopt TSMC’s 3nm technology for its Arrow Lake CPU, scheduled to launch in H2 2024.


Results summary

  • Q2 2023 results beat slightly: TSMC reported $15.67 billion in sales, slightly above the midpoint of guidance. EPS beat consensus due to higher non-operating income. Both GPM and OPM slightly beat guidance thanks to favorable FX and cost control efforts.
  • Q3 2023 guidance: Management guided revenues of $16.7 billion to $17.5 billion (+9% QoQ at the midpoint), gross margin in the range of 51.5%-53.5%, and operating margin in the range of 38%-40%. The gross margin dilution resulting from the N3 ramp-up would be 2-3 percentage points in Q3 2023 and 3-4 percentage points in Q4 2023. This impact would persist throughout the entire year of 2024, affecting the overall gross margin by 3-4 percentage points. Notably, this dilution is higher than the 2-3 percentage points of gross margin dilution experienced during the N5’s second year of mass production in 2021.
  • 2023 revenue guidance revised down but expected: TSMC revised down the full-year revenue guidance to -10% YoY. The management sees weaker-than-expected macroeconomics in H2 2023 affecting the demand for all applications except for AI.
  • Strong AI demand, 50% revenue CAGR forecast: AI revenue currently makes up 6% of TSMC’s total revenue. The company anticipates a remarkable compound annual growth rate (CAGR) of nearly 50% from 2022 to 2027 in the AI sector. As a result of this significant growth, the AI revenue percentage share in TSMC’s total revenue is projected to reach the low teens by 2027.
  • CoWoS capacity expected to double by 2024 end: TSMC is experiencing strong demand in the AI sector, with sufficient capacity for the front-end part but facing challenges in advanced packaging, particularly CoWoS. It is working with customers to meet demand in the short term while rapidly increasing capacity, which it expects to double by the end of 2024, easing the current tightness.
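The AI revenue projection in the bullets above can be sanity-checked with simple share arithmetic. Only the ~6% starting share and the ~50% AI CAGR come from the earnings call; the total-revenue growth rates below are assumptions for illustration:

```python
# Implied AI revenue share, given AI revenue compounding at ~50% CAGR.
ai_share_2023 = 0.06   # AI is ~6% of TSMC revenue today (per the call)
ai_cagr = 0.50         # ~50% CAGR projected for the AI sector
years = 4              # 2023 -> 2027

def ai_share_2027(total_revenue_cagr: float) -> float:
    """AI share of revenue after `years`, assuming total revenue
    grows at the given (assumed) CAGR while AI grows at ai_cagr."""
    return ai_share_2023 * ((1 + ai_cagr) / (1 + total_revenue_cagr)) ** years

# An assumed total-revenue CAGR in the 20-25% range puts the 2027
# AI share in the ~12-15% range, i.e. roughly the low teens.
for g in (0.20, 0.25):
    print(f"total CAGR {g:.0%} -> AI share {ai_share_2027(g):.1%}")
```

In other words, the “low teens by 2027” projection is internally consistent with a ~50% AI CAGR only if total revenue also grows briskly; with flat total revenue, 6% compounding at 50% would overshoot well past the teens.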

Related Posts

5G Advanced and Wireless AI Set To Transform Cellular Networks, Unlocking True Potential

The recent surge of interest in generative AI highlights the critical role that AI will play in future wireless systems. With the transition to 5G, wireless systems have become increasingly complex and more challenging to manage, forcing the wireless industry to think beyond traditional rules-based design methods.

5G Advanced will expand the role of wireless AI across 5G networks, introducing new, innovative AI applications that will enhance the design and operation of networks and devices over the next three to five years. Indeed, wireless AI is set to become a key pillar of 5G Advanced and will play a critical role in the end-to-end (E2E) design and optimization of wireless systems. In the case of 6G, wireless AI will become native and all-pervasive, operating autonomously between devices and networks and across all protocols and network layers.

E2E Systems Optimization

AI has already been used in smartphones and other devices for several years and is now increasingly being used in the network. However, AI is currently implemented independently, i.e. either on the device or in the network. As a result, E2E system performance optimization across devices and the network has not been fully realized yet. One of the reasons for this is that on-device AI training has not been possible until recently.

On-device AI will play a key role in improving the E2E optimization of 5G networks, bringing important benefits for operators and users, as well as overcoming key challenges. Firstly, on-device AI enables processing to be distributed over millions of devices, thus harnessing the aggregated computational power of all these devices. Secondly, it enables AI model learning to be customized to a particular user’s personalized data. Finally, this personalized data stays local on the device and is not shared with the cloud. This improves reliability and alleviates data sovereignty concerns. On-device AI will not be limited to just smartphones but will be implemented across all kinds of devices, from consumer devices to sensors and a plethora of industrial equipment.

New AI-native processors are being developed to implement on-device AI and other AI-based applications. A good example is Qualcomm’s new Snapdragon X75 5G modem-RF chip, which includes a dedicated hardware tensor accelerator. Using Qualcomm’s own AI implementation, this Gen 2 AI processor boosts the X75’s AI performance more than 2.5 times compared to the previous Gen 1 design.

While on-device AI will play a key role in improving the E2E performance of 5G networks, overall systems optimization is limited when AI is implemented independently. To enable true E2E performance optimization, AI training and inference need to be done on a systems-wide basis, i.e. collaboratively across both the network and the devices. Making this a reality in wireless system design requires not only AI know-how but also deep wireless domain knowledge. This so-called cross-node AI is a key focus of 5G Advanced, with a number of use cases being defined in 3GPP’s Release 18 specification and further use cases expected to be added in later releases.

Wireless AI: 5G Advanced Release 18 Use Cases

3GPP’s Release 18 is the starting point for the more extensive use of wireless AI expected in 6G. Three use cases have been prioritized for study in this release:

  • Use of cross-node Machine Learning (ML) to dynamically adapt the Channel State Information (CSI) feedback mechanism between a base station and a device, thus enabling coordinated performance optimization between networks and devices.
  • Use ofMLto enable intelligent beam management at both the base station and device, thus improving usable network capacity and device battery life.
  • Use of ML to enhance positioning accuracy of devices in both indoor and outdoor environments, including both direct and ML-assisted positioning.

Channel State Feedback:

CSI is used to determine the propagation characteristics of the communication link between a base station and a user device and describes how this propagation is affected by the local radio environment. Accurate CSI data is essential to provide reliable communications. With traditional model-based CSI, the user device compresses the downlink CSI data and feeds the compressed data back to the base station. Despite this compression, the signalling overhead can still be significant, particularly in the case of massive MIMO radios, reducing the device’s uplink capacity and adversely affecting its battery life.

An alternative approach is to use AI to track the various parameters of the communications link. In contrast to model-based CSI, a data-driven air interface can dynamically learn from its environment to improve performance and efficiency. AI-based channel estimation thus overcomes many of the limitations of model-based CSI feedback techniques, resulting in higher accuracy and hence improved link performance. This is particularly effective at the edges of a cell.

Implementing ML-based CSI feedback, however, can be challenging in a system with multiple vendors. To overcome this, Qualcomm has developed a sequential training technique which avoids the need to share AI models across vendors. With this approach, the user device is first trained using its own data. Then, the same data is used to train the network. This eliminates the need to share proprietary neural network models across vendors. Qualcomm has successfully demonstrated sequential training on massive MIMO radios at its 3.5GHz test network in San Diego (Exhibit 1).
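The idea behind sequential training can be illustrated with a toy linear model. The sketch below is entirely hypothetical (a 2-D compressor for synthetic 8-D “CSI” vectors; real CSI autoencoders are neural networks trained on measured channels), but it captures the key property: the device learns its encoder from local data, and the network then fits a decoder from the shared dataset without ever seeing the device’s model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "CSI" samples with low-rank structure plus noise.
latent = rng.standard_normal((500, 2))
mix = rng.standard_normal((2, 8))
X = latent @ mix + 0.05 * rng.standard_normal((500, 8))
Xc = X - X.mean(axis=0)

# Step 1 (device side): learn a 2-D linear compressor from local data.
# The top principal components form the optimal linear autoencoder.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W_enc = Vt[:2]            # encoder weights: never leave the device
Z = Xc @ W_enc.T          # compressed feedback, sent over the air

# Step 2 (network side): fit a decoder from (Z, Xc) pairs alone --
# the device's proprietary encoder is not shared, only the data.
W_dec, *_ = np.linalg.lstsq(Z, Xc, rcond=None)

reconstruction_mse = float(np.mean((Z @ W_dec - Xc) ** 2))
print(f"reconstruction MSE: {reconstruction_mse:.4f}")
```

Because the network only ever sees compressed codes and the corresponding channel samples, each vendor’s model stays proprietary, which is the multi-vendor property the text describes.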

© Qualcomm Inc.

Exhibit 1: Realizing system capacity gain even in challenging non-LOS communication

AI-based Millimetre Wave Beam Management:

The second use case involves the use of ML to improve beam prediction on millimetre wave radios. Rather than continuously measuring all beams, ML is used to intelligently select the most appropriate beams to be measured as and when needed. An ML algorithm is then used to predict future beams by interpolating between the selected beams, i.e. without the need to measure the beams all the time. This is done at both the device and the base station. As with CSI feedback, this improves network throughput and reduces power consumption.

Qualcomm recently demonstrated the use of ML-based algorithms on its 28GHz massive MIMO test network and showed that the performance of the AI-based system was equivalent to a base case network set-up where all beams are measured.

Precise Positioning:

The third use case involves the use of ML to enable precise positioning. Qualcomm has demonstrated the use of multi-cell round-trip time (RTT) and angle-of-arrival (AoA)-based positioning in an outdoor network in San Diego. The vendor also demonstrated how ML-based positioning with RF fingerprinting can be used to overcome challenging non-line-of-sight channel conditions in indoor industrial private networks.

An AI-Native 6G Air Interface

6G will need to deliver a significant leap in performance and spectrum efficiency compared to 5G if it is to deliver even faster data rates and more capacity while enabling new 6G use cases. To do this, the 6G air interface will need to accommodate higher-order Giga MIMO radios capable of operating in the upper mid-band spectrum (7-16GHz), support wider bandwidths in new sub-THz 6G bands (100GHz+) as well as on existing 5G bands. In addition, 6G will need to accommodate a far broader range of devices and services plus support continuous innovation in air interface design.

To meet these requirements, the 6G air interface must be designed to be AI native from the outset, i.e. 6G will largely move away from the traditional, model-driven approach of designing communications networks and transition toward a data-driven design, in which ML is integrated across all protocols and layers with distributed learning and inference implemented across devices and networks.

This will be a truly disruptive change to the way communication systems have been designed in the past, but it will offer many benefits. For example, through self-learning, an AI-native air interface design will be able to support continuous performance improvements, where both sides of the air interface — the network and the device — can dynamically adapt to their surroundings and optimize operations based on local conditions.

5G Advanced wireless AI/ML will be the foundation for much more AI innovation in 6G and will result in many new network capabilities. For instance, the ability of the 6G AI-native air interface to refine existing communication protocols and learn new protocols, coupled with the ability to offer E2E network optimization, will result in wireless networks that can be dynamically customized to suit specific deployment scenarios, radio environments and use cases. This will be a boon for operators, enabling them to automatically adapt their networks to target a range of applications, including various niche and vertical-specific markets.

Related Posts:

AI Drives Cloud Player Capex Amid Cautious Overall Spend

  • Cloud service providers’ capex is expected to grow by around 8% YoY in 2023 due to investments in AI and networking equipment.
  • Microsoft and Amazon are among the highest spenders as they invest in data center development. Microsoft will spend over 13% of its capex on AI infrastructure.
  • AI infrastructure can be 10x-30x more expensive than traditional general-purpose data center IT infrastructure.
  • Chinese hyperscalers’ capex is decreasing due to their inability to access NVIDIA’s GPU chips, and decreasing cloud revenues.

New Delhi, Beijing, Seoul, Hong Kong, London, Buenos Aires, San Diego –July 25, 2023

Global cloud service providers will grow capex by an estimated 7.8% YoY in 2023, according to Counterpoint’s latest Cloud Service research. Higher debt costs, enterprise spending cuts and muted cloud revenue growth are impacting infrastructure spend in data centers compared to 2022.

Commenting on the large cloud service providers’ 2023 plans, Senior Research Analyst Akshara Bassi said, “Hyperscalers are increasingly focusing on ramping up their AI infrastructure in data centers to cater to the demand for training proprietary AI models, launching native B2C generative AI user applications, and expanding AIaaS (Artificial Intelligence-as-a-Service) product offerings”.

According to Counterpoint’s estimates, around 35% of the total cloud capex for 2023 is earmarked for IT infrastructure including servers and networking equipment compared to 32% in 2022.

Global Cloud Service Providers’ Capex
Source: Counterpoint Research
2023 Capex Share
Source: Counterpoint Research

In 2023, Microsoft and Amazon (AWS) will account for 45% of the total capex. US-based hyperscalers will contribute 91.9% of the overall global capex in 2023.

Chinese hyperscalers are spending less due to slower growth in cloud revenues amid a weak economy and difficulties in acquiring the latest NVIDIA GPU chips for AI due to US bans. The A800 – the scaled-down version of the flagship A100/H100 chips that NVIDIA has been supplying to Chinese players – may also come under the purview of the ban, further reducing access to AI silicon for Chinese hyperscalers.

Global Cloud Service Providers’ AI Spend as % of Total Capex, 2023
Source: Counterpoint Research

Based on Counterpoint estimates, Microsoft will spend proportionally the most on AI-related infrastructure, with 13.3% of its capex directed towards AI, followed by Google at around 6.8% of its capex. Microsoft has already announced its intention to integrate AI within its existing suite of products.

AI infrastructure can be 10x-30x more expensive than traditional general-purpose data center IT infrastructure.

Though Chinese players are investing a larger portion of their spend in AI, the amount is significantly less than that of their US counterparts due to a lower overall capex.

The comprehensive and in-depth ‘Global Cloud Service Providers Capex’ report is available. Please contact Counterpoint Research to access the report.

Background

Counterpoint Technology Market Research is a global research firm specializing in products in the technology, media and telecom (TMT) industry. It services major technology and financial firms with a mix of monthly reports, customized projects, and detailed analyses of the mobile and technology markets. Its key analysts are seasoned experts in the high-tech industry.

Analyst Contacts

Akshara Bassi


Peter Richardson


Neil Shah


Follow Counterpoint Research

press@www.arena-ruc.com


Related Posts

How Far Has Technology Come in 20 Years?

Stable Diffusion Image of mixed technology2
Source: Created with Stable Diffusion

Twenty years ago, I was an equity analyst for a Wall Street investment bank. At the time, my research director liked to get all the analysts to write occasional thought pieces. In the following article written in June 2003, I chose to write a speculative piece that looked back to 2003 from five years in the future, i.e. 2008. I speculated that there would be quite a few technological leaps in the five intervening years.

Given the 20 years that have now passed since I wrote the article, how many of those technologies have actually come into being? As you will see, not many, while others that were not foreseen have matured – for example, app-based smartphones and music streaming.

Without specifically naming it as artificial intelligence, I foresaw a role for cloud-based intelligent software agents that would provide intuitive assistance in multiple situations – a true digital assistant. These have not come into being and they are not even much discussed. We do have digital assistants such as Apple’s Siri, Google Assistant or Amazon’s Alexa, but they are mostly incapable of anything more than answering simple questions and certainly couldn’t be trusted to book travel tickets, make restaurant reservations or update other people’s diaries. While ChatGPT and derivatives of Large Language Models seem superficially smarter, they are still not yet at the stage of being able to function as a general assistant.

One other technology referenced in the article that is still far from maturity is augmented reality. The glasses described were not too far-fetched – Microsoft’s HoloLens can achieve some of what is described and Epson and Vuzix, for example, have developed glasses that are in use by field service engineers. But these products are not able to reference real-world objects. Apple’s forthcoming Vision Pro, while technically brilliant, would not be a suitable solution for the use case described.

At the end of the article, I listed companies that I expected to be playing a significant role in the development of the various technologies highlighted. But where are those companies now?

For context, and for the younger readers, around the turn of this century, third-generation cellular licenses had been expensively auctioned in several countries and many mobile operators were struggling to generate a return on their investment. Oh, how things have changed (or not)! As an analyst covering mobile technology, I could see that investors were valuing mobile operators solely on their voice and text revenues, with zero value being ascribed to future data revenues. My article was also an attempt to awaken investors to the potential value beyond voice.

Anyway, here’s the report that I wrote in mid-2003. It was written as though it were an article in a business newspaper.

Special Report – June 2008

Connected People

It is just eight years since European wireless telecom companies became the subject of outright derision for spending billions of dollars on licenses to operate third-generation cellular networks. Now the self-same companies have become core to our everyday existence. Their stock, which bottomed in the middle of 2002, has risen steadily ever since.

The original promise of 3G technology was high-speed data networking coupled with an exceptional capacity for both voice and data. But critics said that it was an innovation users didn’t need, want or would be willing to pay for.

When the first commercial 3G networks appeared in 2003 and faltered at the first step, the doubters started to look dangerously like they had a point. But the universe is fickle and within the last two or three years, the combination of maturing networks and the inevitable power of Moore’s Law has started to deliver wireless devices and applications that would have been thought of, if not as science fiction, then at least science-stretching-the-bounds-of-credibility, when the licenses were issued.

However, while the long-time infamy of 3G means it is taking the starring role as industry watchers chart the chequered history of the technology, it is the supporting cast of technologies that has really delivered the goods. Without them, 3G would have remained just another method to access the backbone network.

The following snapshots from one perfectly ordinary day last month show how the coordinated application of a whole slew of technologies has subtly but distinctly altered our lives.

Bristol – May 1, 2008, 12:57 pm

Beads of sweat form on the face of Jim McKenna, a 24-year-old technician, as he studies the guts of a damaged generator. McKenna is a member of a rapid response team, looking after mission-critical power generation facilities across Southern England.

“Dave, I’ve located the damaged circuits, I think I can repair it, but the control unit is non-standard and I’ve not seen one like it before. Can you help me out here?”

McKenna’s voice is picked up by a tiny transducer microphone embedded in a Bluetooth-enabled hands-free earbud. The bud is so small it nestles unobtrusively in the technician’s ear. The earbud is wirelessly connected to the small transceiver on McKenna’s belt. His voice activates a ‘push’-to-talk connection to his controller in the Scottish technical support center. The word push is in quotes because it is his voice that effects the push, leaving McKenna’s hands entirely free.

In the Edinburgh-based command center, David Sanderson, an experienced engineer, maximizes the image from one of a half-dozen sub-screens that compete for his attention. Each screen shows live pictures from his team of technicians with data about their location and degree of job completion.

Sanderson taps the screen again and, 400 miles away in Bristol, a tiny camera on McKenna’s smart glasses zooms in on the generator specification plate. Sanderson peers intently at the screen:

“I see a code on the side panel. I’ve highlighted it for you. Can you scan it? I can then pull the circuit files for you”.

Seemingly in mid-air, a red circle appears around a barcode away to McKenna’s right. The heads-up display in McKenna’s glasses maintains a fix on the code even though he moves his head. He leans across and uses the camera to scan the code, which is instantaneously transmitted back to Edinburgh where the circuit plans are uploaded from the database. Sanderson extracts the relevant section before speaking again to McKenna.

“Jim, I’m initiating the synchronization, you should have it in a few seconds.”

The 3G transceiver on Jim’s belt receives the information and immediately routes it to his smart glasses via Bluetooth. As Jim looks at the damaged circuitry, the heads-up display begins to superimpose the circuit diagram over the actual circuits, adjusting for size. He spends a few minutes comparing the damaged circuits with the schematic images. He calls for more backup.

“Dave, the problem is definitely in this sector of the step-down circuit,” McKenna points to a series of circuit boards, “is there a suggested workaround in the troubleshooting file?”

Within minutes the heads-up display starts guiding McKenna through a series of measures that isolates and bypasses the damaged circuits. Within 20 minutes, McKenna successfully reboots the system – power is restored.

Five years ago, very little of the above could have been done as efficiently and intuitively. Field service engineers needed substantial experience to tackle complex tasks – they also had to carry heavy, often ruggedized PCs and a whole series of manuals on CD-ROMs. Technical backup, where available, was a cellular voice call.

Liverpool Street, London, May 1, 2008, 2:32 pm

Joanne King, an equity analyst, is meeting a buy-side client. As they settle into the soft leather chairs of the meeting room, she slides a flexible plastic sheet across the table. The sheet is printed with electronic ink. The latest marketing pack was downloaded to her mobile terminal on the way over in the taxi. She taps the screen of her smartphone and the slide set appears on the sheet. As Joanne and her client discuss the vagaries of the stock market, they are able to use virtual tabs to flip between ‘pages’ within the pack. When the client requests more information on the balance sheet of one of the companies they’re discussing, Joanne is able to pull down the necessary information, adding it to the slide set.

Midway through the discussion, Joanne hears a subtle tone in her ear indicating an urgent communication request from her personal digital assistant. She apologizes to the client before initiating the communication path. “Wildfire, what’s the problem?” She knows that Wildfire will only override her no-interrupt rule if an issue requires immediate attention.

“An air traffic control strike in Paris has disrupted all flights. Your 6 pm Brussels flight is showing a two-hour delay and may be canceled. The best alternative is to take the Eurostar train. Services leave at 16:30 and 18:30.”

After a moment’s thought, Joanne comes to a decision: “Book the 16:30, please.” Conscious of the topics still to cover in her meeting, she adds, “Can you also have a taxi waiting when I am through here?”

Wildfire confirms the instructions and drops back into meeting mode. Joanne apologizes to the client and resumes her meeting. Meanwhile, Joanne’s software agent communicates with various travel services, canceling her flight reservation and booking the rail service.

Having learned from Joanne’s prior behavior, the agent books a First Class seat in a carriage toward the front of the train. The agent also communicates with a taxi firm – a car will be waiting when her meeting is completed. The agent is authorized to spend money within predefined limits. Simultaneously, the agent modifies Joanne’s expense report and calendar.

Joanne’s dinner date with friends in Brussels will be hard to keep given the change in travel plans. The agent negotiates with the diaries of her three dinner guests and the reservation computer at their chosen restaurant. A new reservation is agreed and four diaries are updated accordingly.

At the conclusion of her meeting, Joanne leaves the slide set contained in the pre-punched flexible display. Her client will be able to store it in standard folders and refer to it at leisure. Solar cells ensure that there is enough power to display the material without having to worry about battery charge.

As she heads for the taxi, Joanne’s location-aware PDA recognizes she is in motion and, therefore, ready to communicate. “Joanne, you have 2 voice messages, 23 business e-mails and 12 personal e-mails. How would you like me to handle them?” Joanne chooses to listen and respond to a voicemail on the short taxi ride to Waterloo, deferring the e-mails for the train.

Once in her seat on the Eurostar train, Joanne unfolds a screen and keyboard that work alongside her 3G smartphone. Bluetooth provides the link between the smartphone, screen and keyboard. The Light Emitting Polymer screen is extremely lightweight and flexible, yet delivers high contrast and color resolution. Power consumption is low.

Joanne spends an hour responding to the e-mails before kicking off her shoes and taking out an e-book to settle down to listen to some music. She is particularly looking forward to a new album she bought on the way to the station. A song she was unfamiliar with came over the radio in the taxi – loving it, but not knowing what it was, Joanne recorded a quick burst. Vodafone, her service provider, was able to identify the music and offered to sell her the single or album. In anticipation of her long train ride, she chose the album. Leaning back in her seat, she lets the cool beats ease her to Brussels.

In 2003, one-on-one presentations were either made from a PC screen or delivered on regular paper. Meeting interruptions were either obtrusive or impossible, and changing travel reservations on the fly typically required several people – often with intervention by the traveler herself. Meanwhile, mobile e-mail was possible but only on large-screen PCs, compromised by size, weight and power consumption, or devices with screens and keyboards too small for anything other than limited responses.

Hyde Park – May 1, 2008, 2:18 pm

Mike Lee is on his way home from high school. He flips his skateboard down three steps and dives for cover in the bushes, the sound of gunfire ringing in his ears. Peering through the leaves, he holds a small flat panel console in front of him. He scans through 120 degrees, concentrating on the screen. The intense rhythms of electro-house are now the loudest sounds he hears, but there is also the distant rap of gunfire. On the screen, he sees the surrounding park, but in addition, the occasional outlandish figure appears, flitting between hiding places among the trees. “Josh! Where are you?” Mike demands in an urgent whisper.

“I’m by the lake dude. Surrounded. Can you get down here? I’m running out of ammo.”

Mike swings around, looking toward the lake through his device. He sees Josh’s position highlighted on the screen. He turns back, takes a deep breath and starts jabbing buttons on his device. Explosions and smoke fill the screen. Then running to the path, he jumps back on his skateboard and carves down the hill to the lake, pitching into the shrubbery next to his buddy Josh. They proceed to engage the advancing enemy in a frenzy of laser grenades, gunfire and whoops of delight.

After a few minutes, they both hear the words they have been waiting for, “Well done men, you have completed Level 12. Hit the download button to move on to the next level.”

Mobile gaming, even as recently as 2003, offered a relatively poor user experience. Simple Java games were the norm. Games now not only involve online buddies but they are also immersed into the surrounding environment, massively enhancing the experience.

3G has come a long way from its ignominious start. However, the real catalyst that has made it a life-changing technology has been the incredible range of diverse technologies that have emerged to support the growth in wireless voice and data applications.

Cast List:

3G smartphones – Nokia, Motorola

Bluetooth earbuds – Sound ID

Heads-up display – Microvision

Voice-driven push-to-talk – Sonim

Voice control – Advanced Recognition Technologies

Personal digital assistant – Wildfire

Electronic ink pad – E Ink, Philips Electronics

Music capture – Shazam Entertainment

Foldable Light Emitting Polymer Display – Technology from Cambridge Display Technology

Augmented reality game console – Nokia N-Gage 4

Intelligent mobile agents – Hewlett Packard

Geo-location technology – Openwave

Where are these companies in 2023?

My original cast of technology characters has seen mixed fortunes: some are still around, but with different owners, while others have disappeared altogether. Few are still going in their original business niche:

Nokia and Motorola are brands that are still making mobile devices, but in different guises than in 2003.

I don’t know what became of Sound ID. There is an app called SoundID created by Sonarworks, but it is different and unrelated to the Sound ID identified in the article. Bluetooth True Wireless earbuds, however, are now a huge market.

Microvision is still in business but has shifted its focus to LiDAR in the automotive space.

Sonim is still in business and still making ruggedized devices, including push-to-talk devices for the safety and security sectors.

Advanced Recognition Technologies was acquired by ScanSoft in 2005.

Wildfire was an innovative voice-controlled personal assistant that was acquired by the operator Orange in 2000. But Orange killed the service in 2005.

E Ink still exists, although Philips parted ways with it in 2005.

Shazam still exists but was acquired by Apple in 2018. When it started in 2002, you had to dial a short number and hold your phone to the sound source. Users would then receive an SMS with the song title and artist.

Cambridge Display Technology is still around. It was floated on Nasdaq in 2004 and acquired by Sumitomo Chemical in 2007.

Hewlett Packard is clearly still around. However, it doesn’t make intelligent software agents. But then again, neither does anyone else, at least not in the way portrayed in the article.

Openwave no longer exists, although many of its businesses have been absorbed into other entities.

Artificial Intelligence: Irrational Exuberance is in Full Swing

As surely as autumn and winter follow summer, the current exuberance around AI will also end in disappointment, because the machines remain incapable of living up to the expectations that have been set for them.

These cycles typically take the form of a discovery of some description, followed by a ramping of expectations, which in turn leads to large amounts of money being invested for fear of missing out (FOMO).

The problem is that the expectations that are set are always unrealistic, meaning that when the time comes to deliver on those expectations, disappointment sets in. This is followed by collapsing valuations, bankruptcies and forced consolidation as investors are no longer willing to suspend disbelief.

This is the fourth AI hype cycle, the others having occurred in the 1960s, the 1980s and 2017-2019. This cycle looks exactly the same as the others, except that it is much larger. Looking at investment activity and news flow, it is also very clear exactly where we are in the cycle.

First, expectations

  • The ability of Large Language Models (LLMs) to mimic human behavior has convinced some of the big names (like Professor Geoffrey Hinton) that artificial superintelligence is now materially closer than it was before.
  • While LLMs do have some very useful and lucrative use cases, they still have no causal understanding of the tasks they are performing.
  • This is why they hallucinate, make the most basic factual errors and are generally completely unreliable.
  • Therefore, the machines remain as stupid as ever. There is no evidence whatsoever that these machines are able to think.
  • But the problem is that they are so good at pretending to think that they are able to fool the great minds that created them.
  • Instead, all they do is calculate statistical relationships, meaning that the big promises that have been made will not be kept.
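The "statistical relationships" point above can be made concrete at toy scale: a bigram model "writes" by looking up which word most often followed the current one in its training text. It has no notion of meaning, only counts. The corpus and function names below are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word -- no understanding involved."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" -- it followed "the" twice, vs once each for "mat"/"fish"
```

Real LLMs condition on far longer contexts with learned weights rather than raw counts, but the underlying operation is still picking statistically likely continuations, which is why fluency does not imply understanding.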

Second, investment

  • There are already many examples of money being thrown at start-ups with valuations and fundamentals being an afterthought:
  • OpenAI’s $30-billion valuation with a corporate culture that doesn’t want to make any profit.
  • Inflection AI raising $1.3 billion from Microsoft and NVIDIA at an estimated valuation of around $5 billion, despite having been around for only a year and having no commercial product.
  • Mistral AI raising $113 million at a $260-million pre-money valuation despite being only a few weeks old with no revenues, no product and probably only the vaguest idea of what it is going to do.
  • This can be described as the very definition of a bubble where rationality gets lost in the mad rush toward the next big thing. A lot of shirts are going to be lost.

The latest innovations around LLMs have produced some remarkable abilities which, no doubt, will be put to both good and lucrative use. However, the technology upon which they are based has not changed, meaning that the limitations that prevented digital assistants and autonomous driving from being useful for anything more than the most basic tasks are also going to trip LLMs up.

Furthermore, this is no longer the exclusive realm of the big, well-financed companies that can pay tens of millions of dollars for massive compute capacity, as hobbyists and enthusiasts are now creating generative AI. Meta Platforms’ series of LLMs called LLaMA is now freely available to anyone who wants to tinker, and advances in training techniques mean that it is possible to fine-tune a 7-billion-parameter model on a powerful laptop.
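The "fine-tune a 7-billion-parameter model on a laptop" claim rests on techniques such as low-rank adaptation (LoRA), which trains two small adapter matrices per weight matrix instead of the full weights. A rough sketch of the arithmetic, using generic 7B-class transformer dimensions rather than any model's exact configuration:

```python
def lora_trainable_params(d_in, d_out, rank):
    """A LoRA adapter replaces a full d_in x d_out weight update with
    two low-rank factors: (d_in x rank) and (rank x d_out)."""
    return d_in * rank + rank * d_out

# Illustrative 7B-class transformer: 32 layers, hidden size 4096,
# adapting 4 projection matrices per layer at rank 8.
full = 32 * 4 * 4096 * 4096                           # weights in the adapted matrices
lora = 32 * 4 * lora_trainable_params(4096, 4096, 8)  # trainable adapter weights
print(f"full: {full/1e6:.0f}M, LoRA: {lora/1e6:.1f}M "
      f"({100*lora/full:.2f}% of the adapted weights)")
# Only a fraction of a percent of the adapted weights end up trainable,
# which is what brings optimizer memory down to laptop scale.
```

Combined with quantizing the frozen base weights, this is broadly how community fine-tunes of open models are done on consumer hardware.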

This is why there are models popping up all over the place that are completely free to use. Some of them actually work quite well. Hence, the pricing of $20 per month for services like GPT-4, Perplexity AI and Midjourney may soon come under relentless pressure. This is really bad news for investors relying on spreadsheets for their return because no one seems to have modeled this scenario out.

The first sign of trouble will come when companies come back to the market after spending the money on fancy offices and expensive staff but nothing to show for the investments so far. This is when the down rounds begin, disillusionment sets in, reality makes its presence felt and winter begins.

One suspects this will begin sometime in the first half of 2024 and the fallout will not be pretty.

(This is a version of a blog that first appeared on Radio Free Mobile. All views expressed are Richard’s own.)



AI Needs to Reside in the Vehicle to Work Well

Mercedes is running a beta program in which those who opt in will be able to access ChatGPT from their vehicle by interacting with the voice assistant already present in MBUX-equipped vehicles. But rather than the cloud-based service Mercedes is going with today, it should be looking at implementing ChatGPT directly in the vehicle.

  • Mercedes owners in the US can enroll for the program by accepting an update for their car.
  • The test is due to run for three months and is being supported by Microsoft’s Azure OpenAI Service, an API to which clients can connect their services to have generative AI functionality.
  • Mercedes is able to implement this service very easily because all it is really doing is providing a prompt for the vehicle assistant to fill in, send it to the cloud and then read out the results.
  • This means that all of the inference or processing of the request will be done in the cloud with the voice assistant doing nothing more than acting as a front end to provide the voice functionality.
  • The vehicle is a use case where generative AI could have a disproportionately large impact. This is because a touch-based icon grid is a substandard user experience no matter who provides it.
  • The problem that the car makers have is that their icon grid is much worse than Apple’s, Google’s or Tesla’s.
  • Furthermore, in 2016 and 2017 we concluded that voice was the leading contender to improve the digital experience in the vehicle but that voice was not good enough to create an acceptable user experience.
  • This is why vehicles are still limping along with smartphones embedded in the dashboard.
  • We have also concluded that generative AI represents a significant step forward in the ability of machines to communicate with humans and provide a user interface for a digital service.
  • Consequently, generative AI offers a significant opportunity for vehicle makers to win back the digital initiative that they have ceded to the digital ecosystems.
  • This is extremely important as vehicle makers’ ability to monetize the market for in-vehicle digital service will be contingent on their ability to remain relevant in the digital vehicle experience.
  • This is why Apple and Google are coming aggressively after the vehicle and so far, the OEMs have mounted feeble resistance or offered complete capitulation.
  • The problem with this approach is that the only way to implement generative AI effectively in the vehicle is to put it directly in the vehicle.
  • This is because reliability and speed are critical, and in this example when the network goes, the service goes with it.
  • Furthermore, it is unlikely that there will be any real integration with the vehicle, meaning that telling ChatGPT that one is feeling hot is likely to result in silence rather than the air-conditioner being turned up.
  • Using ChatGPT as the benchmark implementation in the vehicle will have a profound impact on the cost of the vehicle’s electronics as well as its power consumption, which in an EV is a deal breaker.
  • There are rapid developments going on in the open-source community that may make this a lot easier to achieve, but implementing large language models outside of the data center remains a work in progress.
  • Despite the current limitations, the potential for generative AI to help OEMs to overcome their digital shortcomings is substantial and represents one of the best opportunities the OEMs have had for a long time.
  • The risk is that if no one uses it as a result of the way the Mercedes experiment is implemented, it will lead to the (wrong) conclusion that putting it in the vehicle is a waste of time.
  • This would lead to the squandering of another opportunity, resulting in digital irrelevance and greater commoditization.
  • We remain pretty pessimistic about the outlook for the OEMs.
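The two architectures argued about above can be contrasted in a sketch. Everything here is hypothetical — the function names, the stubs standing in for models, and the offline behavior are invented for illustration, not Mercedes' or Microsoft's actual implementation:

```python
def cloud_assistant(utterance, network_up, ask_cloud):
    """Current pattern: the car is only a front end for a cloud model;
    when the network drops, the service drops with it."""
    if not network_up:
        return "Sorry, no connection."
    return ask_cloud(utterance)

def onboard_assistant(utterance, network_up, ask_cloud, local_model):
    """Argued-for pattern: a local model always answers (and could be
    integrated with vehicle controls); the cloud is an optional extra."""
    if network_up:
        return ask_cloud(utterance)
    return local_model(utterance)

# Stubs standing in for a real cloud API and a real on-device model.
cloud = lambda u: f"cloud answer to: {u}"
local = lambda u: f"local answer to: {u}"

print(cloud_assistant("I'm feeling hot", network_up=False, ask_cloud=cloud))
print(onboard_assistant("I'm feeling hot", network_up=False,
                        ask_cloud=cloud, local_model=local))
```

The on-device path is what makes reliability, latency and integration with vehicle functions (such as actually turning up the air-conditioning) possible, at the cost of the electronics and power budget noted above.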

(This is a version of a blog that first appeared on Radio Free Mobile. All views expressed are Richard’s own.)


COMPUTEX 2023: AI Solutions, Capabilities in Focus

Tech giants showcased their most advanced solutions in AI and computing at the COMPUTEX 2023 show in Taipei in the first week of June. If NVIDIA CEO Jensen Huang’s keynote address focused on the company’s game-changing innovations around AI, Arm CEO Rene Haas’ keynote had compelling demonstrations showcasing Arm’s capabilities in AI. Qualcomm focused on on-device intelligence enhancement and Hybrid AI as the mainstream format of AI in the future. With meaningful upgrades in computing capability, we expect to see the beginning of a new chapter in the coming years. In the following sections, we summarize the key takeaways from COMPUTEX 2023.

NVIDIA: Grace Hopper Superchip to boost AI revolution

The world’s first-of-its-kind Grace Hopper Superchip, manufactured on TSMC’s 4nm process node, is likely to reinforce NVIDIA’s commitment to AI. In addition, more GPUs will be used for generative AI training and inference models, which will accelerate the transformative technology in the near term, Mr. Huang highlighted.

Our Associate Director, Brady Wang, shared his ideas and insights at the influential event.


NVIDIA also introduced the Spectrum-X platform, a fusion of the Spectrum-4 switch and BlueField-3 DPUs, which boasts a record 51Tb/s Ethernet speed and is tailor-made for AI networks. Combined with BlueField-3 DPUs and NVIDIA LinkX optics, it forms an end-to-end 400GbE network optimized for AI clouds. This innovation not only fits NVIDIA’s target but also consumes a great amount of foundry capacity, especially at TSMC.

NVIDIA ACE Framework

Source: Nvidia

NVIDIA: Avatar Cloud Engine (ACE) for Games

NVIDIA did not forget its loyal gamers. This time, it introduced the Avatar Cloud Engine (ACE) for Games, a groundbreaking custom AI model service. ACE empowers non-playable characters (NPCs) in games with AI-driven natural language interactions, revolutionizing the gaming experience. With ACE, gamers can enjoy more immersive and intelligent gameplay.

Notes from analyst Q&A with NVIDIA founder and CEO Jensen Huang

  • If most of the workload involves training AI models, the data center operates as an AI factory. An optimal computer is capable of handling both training and inference tasks, although the selection of processors is contingent upon the specific inference type.
  • In the foreseeable future, AI is poised to become the predominant force within the realm of NPCs in video games. These NPCs will possess a distinctive narrative and contextual background, effortlessly engaging with one another in a harmonious manner. Their movements will be fluid and their comprehension of instructions will be exceptional.
  • AI revolutionizes the user experience onPCs, propelling the advancement of personalized recommender systems. Furthermore, even on compact smartphones, it taps into extensive personalized internet data. As a result, future interactions generate real-time, customized content, transitioning towards generative processing to accommodate the escalating demand for personalized information. This groundbreaking development signifies the dawning of a new era characterized by the proliferation of generated and augmented information, thereby departing from the previously dominant retrieval-centric paradigm.
  • InfiniBand excels in high-performance computing, offering superior throughput for single computers and AI factories. It dominates in supercomputers and AI systems, while Ethernet is prevalent in cloud environments.
  • China leads the way in cloud services, consumer internet and digital payments. It has swiftly advanced in electric and autonomous vehicles, showcasing local innovation through numerous GPU start-ups. This underscores China’s technological dominance and promising future growth.
  • Omniverse streamlines computer setup with cloud integration and partnerships, enabling effortless information streaming through browser-based access. It optimizes factory design and simulation, minimizing work, errors and expenses.

Arm: Everything now is a computer; AI runs on Arm

Citing the remarkable 260% increase in data center workloads from 2015 to 2021, Arm CEO Rene Haas emphasized the pivotal role of data centers, automotive technology and AI in driving the compute demand powered by Arm designs.

According to Counterpoint Research, Arm-based notebooks will gain over Intel and AMD, almost doubling their shipment share to 25% by 2027 from 14% today.

Laptop Shipment Share by CPU/SoC Type %


Source: Counterpoint Research
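As a quick sanity check on that forecast, the implied annual growth in share works out as follows (treating "today" as 2023, which is an assumption on our part):

```python
def implied_cagr(start_share, end_share, years):
    """Compound annual growth rate needed to move between two share levels."""
    return (end_share / start_share) ** (1 / years) - 1

# 14% share today (assumed 2023) growing to 25% by 2027.
growth = implied_cagr(0.14, 0.25, 2027 - 2023)
print(f"{growth:.1%} per year")  # roughly 15.6% annual growth in share
```

That is an aggressive but not unprecedented pace for a platform shift, given Apple silicon's contribution to the Arm notebook base.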

AI emerged as a focal point during Haas’ keynote, where he captivated the audience with compelling demonstrations showcasing Arm’s capabilities. Arm is poised to support an even broader range of applications in the future.

We also echo Arm’s view and believe that generative AI, digital twins and edge computing will emerge as top technology trends in 2023 and affect the whole tech industry.

Qualcomm: Focus on on-device intelligence enhancement, Hybrid AI

Qualcomm’s Hexagon AI processor offers 3-5 times better computing performance compared to existing CPU/GPU solutions. The company aims to expand the application of its Snapdragon 8cx Gen 3 processor to the laptop industry with thorough support from Microsoft.

For AI, Qualcomm believes there are limits to the efficiency improvements achievable with edge AI or cloud AI alone. However, leveraging Qualcomm’s connectivity solutions and combining edge AI with cloud AI can provide incremental benefits in terms of cost/energy savings, privacy and security enhancements, reliability, and latency. Therefore, the company expects Hybrid AI (edge AI + cloud AI) to be the mainstream format of AI in the future.
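Qualcomm's hybrid framing can be sketched as a simple routing policy: keep requests on the device where privacy, connectivity or cost favors it, and fall back to the cloud only for requests too heavy for the on-device model. The function name, thresholds and decision order below are invented for illustration, not Qualcomm's actual scheme:

```python
def route_request(tokens_needed, privacy_sensitive, network_up,
                  edge_capacity=1000):
    """Decide whether an AI request runs on the device ('edge') or in the cloud."""
    if privacy_sensitive or not network_up:
        return "edge"    # keep data local / stay usable offline
    if tokens_needed > edge_capacity:
        return "cloud"   # too heavy for the on-device model
    return "edge"        # cheapest and lowest-latency option by default

print(route_request(200, privacy_sensitive=False, network_up=True))    # edge
print(route_request(5000, privacy_sensitive=False, network_up=True))   # cloud
print(route_request(5000, privacy_sensitive=True, network_up=True))    # edge
```

The point of the hybrid argument is exactly this kind of split: most routine requests never need to leave the device, which is where the cost, energy and privacy savings come from.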

Conclusion

With the COVID-19 pandemic loosening its grip, COMPUTEX was back in its on-site mode this year in Taipei with solid and eye-catching AI solutions. We are stepping into a new computing era, with generative AI set to transform our lives. Not only server vendors and data center hyperscalers, but mobile and PC vendors are also working together to facilitate technology improvements with AI support.

Counterpoint’s analysts continue to work closely with the tech product market to monitor all changes and trends.
