本週,英偉達CEO黃仁勳接受了《No Priors》節目主持人的採訪,就英偉達的十年賭注、xAI超級集群的快速發展、NVLink技術創新等AI相關話題進行了一場深度對話。

黃仁勳表示,沒有任何物理定律可以阻止將AI資料中心擴展到一百萬個晶片,儘管這是一個難題。多家大公司,包括OpenAI、Anthropic、Google、Meta、微軟等,都在爭奪AI領域的領導地位,競相攀登科技的高峰;但重新創造智能的潛在回報是如此之大,以至於不能不去嘗試。

摩爾定律曾是半導體產業發展的核心法則,預測晶片的電晶體數目每兩年會翻倍,從而帶來性能的持續提升。然而,隨著物理極限的接近,摩爾定律的速度開始放緩,晶片效能提升的瓶頸逐漸顯現。

為了解決這個問題,英偉達將不同類型的處理器(如CPU、GPU等)結合起來,透過並行處理來突破傳統摩爾定律的限制。黃仁勳表示,未來10年,整體計算性能每年有望提升兩到三倍,成本與能耗則每年降為原來的二分之一到三分之一,他稱之為“超摩爾定律曲線”。

黃仁勳也提到,我們現在可以將AI軟體擴展到多個資料中心:“我們已經做好準備,能夠將計算擴展到前所未有的水平,而我們正處於這一領域的起步階段。”

以下是黃仁勳講話的亮點:

1.我們正針對未來10年進行重大的投資。我們正在投資基礎設施,打造下一代AI計算平台。我們在軟體、架構、GPU以及所有實現AI開發所需的組件上都進行了投資。

2.摩爾定律,即電晶體數目每兩年翻倍的預言,曾經是半導體產業的成長指南。然而,隨著物理極限的接近,摩爾定律已不再能夠單獨推動晶片性能的提升。為了解決這個問題,英偉達採用了類似於“異構計算”的方式,即將不同類型的處理器(如CPU、GPU等)結合起來,透過並行處理來突破傳統摩爾定律的限制。英偉達的技術創新,如CUDA架構和深度學習優化,使得AI應用得以在超越摩爾定律的環境下高速運行。

3.我們推出了NVLink作為互連技術,它使得多個GPU能夠協同工作,每個GPU處理工作負載的不同部分。透過NVLink,GPU之間的頻寬和通訊能力大幅提升,使得資料中心能夠擴展並支持AI工作負載。

4.未來的AI應用需要動態和彈性強的基礎設施,能夠適應各種規模和類型的AI任務。因此,英偉達致力於建構可以靈活配置和高效運作的基礎設施,滿足從中小型AI專案到超大規模超級運算叢集的需求。

5.建構AI資料中心的關鍵是要同時優化效能和效率。在AI工作負載中,你需要龐大的電力,而散熱成為一個巨大的問題。所以我們花了大量時間優化資料中心的設計和運營,包括冷卻系統和電力效率。

6.在硬體快速發展的背景下,保持軟體與硬體架構的兼容性顯得格外重要。黃仁勳提到,我們必須確保我們的軟體平台,如CUDA,可跨代硬體使用。開發者不應當每次我們推出新晶片時都被迫重寫程式碼。因此,我們確保保持向後相容,並讓軟體能夠在我們開發的任何新硬體上高效運行。

7.我們協助xAI建造了一個超級集群,它將成為世界上最大的AI超級運算平台之一。這個超級集群將提供支持一些最雄心勃勃的AI項目所需的運算能力。這是我們推動AI前進的一大步。

8.擴充AI資料中心的一個大挑戰是管理它們消耗的巨大能源。問題不僅僅是構建更大、更快的系統。我們還必須處理運行這些超大規模系統時面臨的熱能和電力挑戰。為了應對這一切,需要創新的工程技術來確保基礎設施能夠應對。

9.AI在晶片設計中的作用日益重要,黃仁勳指出,AI已經在晶片設計中發揮重要作用。我們使用機器學習來幫助設計更有效率的晶片,速度更快。這是我們設計下一代英偉達晶片的關鍵部分,並幫助我們建立專為AI工作負載優化的晶片。

10.英偉達市值的激增是因為我們能夠將公司轉型為AI公司。我們從一開始是GPU公司,但我們已經轉型成AI計算公司,這項轉型是我們市值成長的關鍵部分。AI技術的需求正在迅速成長,我們處在一個能夠滿足這項需求的有利位置。

11.具身AI(Embodied AI)是指將AI與物理世界結合。透過這種方式,AI不僅可以在虛擬環境中進行任務處理,還能在現實世界中進行決策並執行任務。具身AI將推動智慧硬體、自動駕駛等技術的快速發展。

12.AI不僅僅是工具,它也可以成為‘虛擬員工’,幫助提升工作效率。AI能夠在數據處理、程式設計、決策等領域替代或輔助人類工作,進而改變整個勞動市場和工作方式。

13.AI將在科學與工程領域產生巨大影響,特別是在藥物研發、氣候研究、物理實驗等領域。AI將有助於科學家處理大量數據,揭示新的科學規律,並加速創新。它還將在工程領域優化設計,提高效率,推動更具創新性的技術發展。

14.我自己也在日常工作中使用AI工具,來提高效率和創造力。我認為,AI不僅能夠幫助我們處理複雜的數據和決策任務,還能提升我們的創意思維與工作效率,成為每個人工作中不可或缺的一部分。

以下是採訪文字實錄全文,由AI翻譯:

主持人:Welcome back, Jensen. 30 years in to Nvidia and looking 10 years out, what are the big bets you think are still to make? Is it all about scale up from here? Are we running into limitations in terms of how we can squeeze more compute and memory out of the architectures we have? What are you focused on? Well.

嗨,Jensen,歡迎回來!你在英偉達工作了30年,展望未來10年,你認為還有哪些重要的賭注要下?是不是說我們只需要繼續擴大規模?我們在現有架構中是否會遇到限制,無法再榨出更多的運算能力和記憶體?你目前關注的重點是什麼?

黃仁勳:If we take a step back and think about what we've done, we went from coding to machine learning, from writing software tools to creating AIs and all of that running on CPUs that was designed for human coding to now running on GPUs designed for AI coding, basically machine learning. And so the world has changed the way we do computing, the whole stack has changed. And as a result, the scale of the problems we could address has changed a lot because we could, if you could parallelize your software on one GPU, you've set the foundations to parallelize across a whole cluster or maybe across multiple clusters or multiple data centers. And so I think we've set ourselves up to be able to scale computing at a level and develop software at a level that nobody's ever imagined before. And so we're at the beginning that over the next 10 years, our hope is that we could double or triple performance every year at scale, not at chip, at scale. And to be able to therefore drive the cost down by a factor of 2 or 3, drive the energy down by a factor of 2, 3 every single year. When you do that every single year, when you double or triple every year in just a few years, it adds up. So it compounds really aggressively. And so I wouldn't be surprised if, you know, the way people think about Moore's Law, which is 2 x every couple of years, you know, we're gonna be on some kind of a hyper Moore's Law curve. And I fully hope that we continue to do that. Well, what.

以前我們程式設計都是靠自己寫程式碼,現在我們開始讓機器自己學習,自己寫程式碼。以前我們用的那種電腦晶片(CPU)是給人寫程式碼用的,現在我們使用的電腦晶片(GPU)是給機器學習用的。因為這些變化,我們現在處理問題的方式和以前完全不一樣了。打個比方,如果你能讓一個機器學習程式在一個GPU上運行,那你就可以讓它在整個電腦群組裡,甚至在很多電腦群或資料中心運行。這意味著我們現在能處理的問題比以前大多了。所以,我們相信自己已經建立了能夠大規模擴展運算能力和開發軟體的基礎,這個規模是以前沒人想像過的。

我們希望在未來10年裡,每年都能讓整體運算能力提升到原來的兩到三倍,不是單一晶片的能力,而是整體系統的能力。這樣的話,我們就能每年把運算成本降為原來的二分之一到三分之一,把能耗也同樣降低。這種增長如果每年都能實現,那麼幾年下來,累積的成長會非常驚人。因此,我認為未來的運算將會超越傳統的“摩爾定律”(即每兩年計算能力翻倍),可能會走上一條更快的成長曲線,我也非常希望能夠繼續沿著這個方向前進。
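為了直觀感受“每年兩到三倍”與傳統摩爾定律“每兩年翻倍”在複利上的差距,下面用一小段Python做個粗略估算(假設增速恆定,數字僅為示意,並非英偉達的官方數據):

```python
# 粗略比較:傳統摩爾定律 vs 每年2~3倍的整體系統性能成長(僅為示意)
years = 10

moore = 2 ** (years / 2)     # 每兩年翻倍:10年約 2^5 = 32 倍
hyper_2x = 2 ** years        # 每年2倍:10年為 2^10 = 1024 倍
hyper_3x = 3 ** years        # 每年3倍:10年為 3^10 = 59049 倍

print(f"傳統摩爾定律 10 年累積:約 {moore:.0f} 倍")
print(f"每年 2 倍 10 年累積:約 {hyper_2x:.0f} 倍")
print(f"每年 3 倍 10 年累積:約 {hyper_3x:.0f} 倍")
```

可以看到,年化2~3倍的複利在10年尺度上,與“每兩年翻倍”的差距是數量級的,這正是黃仁勳所說“複利效應非常猛烈”的含義。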

主持人:What do you think is the driver of making that happen even faster than Moore's Law? Cuz I know Moore's Law was sort of self reflexive, right? It was something that he said and then people kind of implemented it to make it happen.

你認為是什麼因素推動了運算能力成長速度超過摩爾定律的?因為我知道,摩爾定律本身就是一種“自我實現”的規律,對吧?也就是說,摩爾定律本身是摩爾提出的,然後大家就照這個規律去做,結果它就實現了。

黃仁勳:Yep, two fundamental technical pillars. One of them was Dennard scaling and the other one was Carver Mead's VLSI scaling. And both of those techniques were rigorous techniques, but those techniques have really run out of steam. And, and so now we need a new way of doing scaling. You know, obviously the new way of doing scaling are all kinds of things associated with co-design. Unless you can modify or change the algorithm to reflect the architecture of the system, or change the system to reflect the architecture of the new software, and go back and forth. Unless you can control both sides of it, you have no hope. But if you can control both sides of it, you can do things like

move from FP64 to FP32 to BF16 to FP8 to, you know, FP4 to who knows what, right? And so, and so I think that co-design is a very big part of that. The second part of it, we call it full stack. The second part of it is data center scale. You know, unless you could treat the network as a compute fabric and push a lot of the work into the network, push a lot of the work into the fabric. And as a result, you're compressing, you know, doing compressing at very large scales. And so that's the reason why we bought Mellanox and started fusing InfiniBand and NVLink in such an aggressive way.

過去推動技術進步的兩大技術支柱是Dennard縮放(Dennard Scaling)和Carver Mead的VLSI縮放。但是這兩種方法現在都已經走到盡頭,我們需要新的擴展方式。

新方式就是“協同設計”(co-design),也就是軟體和硬體必須同時考慮和最佳化。具體來說,如果你不能修改或調整演算法,使其與系統的架構匹配,或不能改變系統架構,以適應新軟體的需求,那就沒有希望。但如果你能同時控制軟體和硬體,你就能做很多新的事情,比如:從高精度的FP64轉到低精度的FP32,再到BF16、FP8、甚至FP4等更低精度的計算。
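下面用NumPy做一個極簡的示意,展示同一組“權重”從FP64一路降到FP16時,記憶體佔用與數值誤差的變化;NumPy沒有FP8/FP4的原生型別,所以用簡單的線性量化來近似4位元的情況。這只是概念演示,並非英偉達實際的量化流程:

```python
import numpy as np

rng = np.random.default_rng(0)
w64 = rng.standard_normal(1_000_000)      # 一組以FP64儲存的示意性權重

for dtype in (np.float64, np.float32, np.float16):
    w = w64.astype(dtype)
    err = np.abs(w.astype(np.float64) - w64).mean()
    print(f"{np.dtype(dtype).name:>8}: {w.nbytes / 1e6:5.1f} MB, 平均絕對誤差 {err:.2e}")

# 粗略模擬4位元線性量化(-8..7 共16級),僅示意更低精度下的取捨
scale = np.abs(w64).max() / 7
q = np.clip(np.round(w64 / scale), -8, 7)
err4 = np.abs(q * scale - w64).mean()
print(f"   int4近似: {len(q) * 0.5 / 1e6:5.1f} MB(理論上每值4位元), 平均絕對誤差 {err4:.2e}")
```

精度每降一級,記憶體和頻寬需求大約減半,但誤差上升;協同設計的意義就在於由演算法端確認模型能承受多低的精度,硬體端再針對該精度做最佳化。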

這就是為什麼“協同設計”這麼重要的原因。另外,另一個重要的部分是全端設計。這意味著,你不僅要考慮硬體,還要考慮資料中心層級的規模。比如,必須把網路當作一個運算平台來使用,把大量的運算任務推到網路裡,利用網路和硬體進行大規模壓縮運算。

因此,我們收購了Mellanox,並開始非常積極地推動InfiniBand和NVLink這類高速連接技術,來支援這種全新的大規模運算架構。

And now look where NVLink is gonna go. You know, the compute fabric is going to scale out what appears to be one incredible processor called a GPU. Now we get hundreds of GPUs that are gonna be working together. You know, most of these computing challenges that we're dealing with now, one of the most exciting ones, of course, is inference time scaling, has to do with essentially generating tokens at incredibly low latency because you're self reflecting, as you just mentioned. I mean, you're gonna be doing tree search, you're gonna be doing chain of thought, you're gonna be doing probably some amount of simulation in your head. You're gonna be reflecting on your own answers. Well, you're gonna be prompting yourself and generating text to yourself, you know, silently, and still respond hopefully in a second. Well, the only way to do that is if your latency is extremely low. Meanwhile, the data center is still about producing high throughput tokens because you know, you still wanna keep cost down, you wanna keep the throughput high, you wanna, right, you know, generate a return. And so these two fundamental things about a factory, low latency and high throughput, they're at odds with each other. And so in order for us to create something that is really great in both, we have to go invent something new, and NVLink is really our way of doing that. We now have a virtual GPU that has an incredible amount of flops because you need it for context. You need a huge amount of memory, working memory, and still have incredible bandwidth for token generation all at the same time.

現在看NVLink(英偉達的高速連接技術)將走向哪裡,未來的運算架構將會變得非常強大。你可以把它想像成一個超強大的處理器,就是GPU(圖形處理單元)。而現在,英偉達的目標是把數百個GPU整合到一起,協同工作,形成一個龐大的運算平台。

現在我們面臨的計算挑戰中,有一個非常令人興奮的問題就是推理時間擴展(inference time scaling)。特別是在生成文字時,需要非常低的延遲。因為就像你剛才提到的,我們的思維其實是一種自我反思的過程:你可能在腦海中進行“樹形搜尋”(tree search)、思維鏈(chain of thought),甚至可能會進行某種模擬,回顧自己的答案。你會自己給自己提問,並產生答案,在大腦裡“默默地”思考,然後希望能在一秒之內回應出來。
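這裡說的“自我反思”式推理,大致可以想成下面這種迴圈:模型先給出草稿,再對自己的答案提出批評並修改,直到滿意或用完輪數。以下是純示意的Python骨架,其中的 generate() 代表任意一次語言模型呼叫,是假設的介面,並非任何特定API:

```python
def generate(prompt: str) -> str:
    """假設的語言模型呼叫介面(此處僅回傳佔位文字,實務上會呼叫真正的模型)。"""
    return f"[模型回覆:{prompt[:20]}...]"

def self_reflective_answer(question: str, max_rounds: int = 3) -> str:
    answer = generate(f"請回答:{question}")
    for _ in range(max_rounds):
        critique = generate(f"請指出下面回答的錯誤或不足。問題:{question} 回答:{answer}")
        if "沒有問題" in critique:       # 簡化的停止條件
            break
        answer = generate(f"請根據以下批評改寫回答。批評:{critique} 原回答:{answer}")
    return answer

print(self_reflective_answer("NVLink 解決了什麼問題?"))
```

每多一輪反思,就要在使用者看到最終答案之前多生成好幾輪token,這正是黃仁勳強調“延遲必須極低”的原因。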

為了做到這一點,計算的延遲必須非常低,因為你不可能等太久才能得到結果。

但同時,資料中心的任務是產生大量的高吞吐量的“token”(符號)。你需要控製成本,保持高吞吐量,並且確保能夠獲得回報。所以,低延遲和高吞吐量是兩個相互矛盾的目標:低延遲要求快速回應,而高吞吐量則需要處理更多的數據。這兩者之間存在衝突。
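這個矛盾可以用一個非常簡化的批次模型來量化:批次越大,整個系統每秒產出的token越多(吞吐高),但單一請求等到下一個token的時間也越長(延遲高)。以下數字純屬假設,只用來說明取捨,並非任何實際GPU的效能數據:

```python
# 簡化模型:一次批次前向所需時間 = 固定開銷 + 每個請求的邊際成本(數字皆為假設)
FIXED_MS = 20.0      # 假設的每步固定開銷(毫秒)
PER_REQ_MS = 0.5     # 假設的每請求邊際成本(毫秒)

for batch in (1, 8, 64, 256):
    step_ms = FIXED_MS + PER_REQ_MS * batch     # 為每個請求各產生一個token所需時間
    latency_ms = step_ms                        # 單一請求等下一個token的延遲
    throughput = batch / step_ms * 1000         # 系統整體每秒產生的token數
    print(f"批次={batch:4d}  延遲={latency_ms:6.1f} ms/token  吞吐={throughput:8.1f} tokens/s")
```

可以看到吞吐隨批次上升,延遲也跟著變長;要同時把兩者都做好,就需要更高的互連頻寬與更大的聚合記憶體,這正是下文NVLink要解決的問題。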

為了同時做到這兩點,必須創造一些全新的技術,而NVLink就是我們解決這個問題的方法之一。透過NVLink,英偉達希望能夠在確保高吞吐量的同時,也能提供低延遲,從而解決這一計算上的矛盾,提升整體效能。

現在我們有了虛擬GPU,它的運算能力非常強大,因為我們需要這麼強的運算能力來處理上下文。也就是說,當我們在處理一些任務時,需要非常大的內存(特別是工作內存),同時還要有極高的頻寬來生成token(即文字或資料符號)。

主持人:Building the models, actually also optimizing things pretty dramatically like David and my team pull data where over the last 18 months or so, the cost of 1 million tokens going into a GPT four equivalent model is basically dropped 240 x. Yeah, and so there's just massive optimization and compression happening on that side as.

建構模型的團隊其實也在大幅度地優化和壓縮,比如David和我的團隊整理的數據顯示,在過去18個月左右,輸入GPT-4同等級模型的每一百萬個token的成本,大約下降了240倍。
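順帶做個算術換算:18個月下降240倍,折合每個月成本大約降為前一個月的74%左右(約降26%)。以下只是對主持人引用數字的簡單換算,並非官方統計:

```python
total_drop = 240      # 主持人提到的總降幅(倍)
months = 18

monthly = total_drop ** (1 / months)     # 每月需要的縮減倍數
yearly = total_drop ** (12 / months)     # 換算成年化降幅

print(f"每月成本約為前一月的 {1 / monthly:.1%}(約下降 {1 - 1 / monthly:.0%})")
print(f"年化約下降 {yearly:.0f} 倍")
```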

黃仁勳:Well. Just in our layer, just on the layer that we work on. You know, one of the things that we care a lot about, of course, is the ecosystem of our stack and the productivity of our software. You know, people forget that because you have CUDA as a foundation and that's a solid foundation, everything above it can change. If everything, if the foundation's changing underneath you, it's hard to build a building on top. It's hard to create anything interesting on top. And so CUDA made it possible for us to iterate so quickly just in the last year. And then we just went back and benchmarked when Llama first came out, we've improved the performance of Hopper by a factor of five without the algorithm, without the layer on top ever changing. Now, well, a factor of five in one year is impossible using traditional computing approaches. But with accelerated computing and using this way of co-design, we're able to explore all kinds of new things.

在我們的工作領域裡,有一件非常重要的事情,就是技術堆疊的生態系統和軟體的生產力。我們特別重視的是CUDA這個基礎,它非常穩定和堅實。因為如果基礎平台不斷變化,想要在上面建立一個系統或應用就非常困難,根本無法在不穩定的基礎上創造出有趣的東西。所以,CUDA的穩定性讓我們能夠非常快速地進行迭代和創新,尤其是在過去一年裡。

然後,我們還做了一個比較測試:當Llama首次推出時,我們透過優化Hopper(英偉達的GPU架構),在不改變演算法、不改變上層架構的情況下,把性能提升了5倍。這種5倍的提升,在傳統的計算方法下幾乎不可能實現。但透過協同設計這種新的方法,我們能夠在現有的基礎上不斷探索新的技術可能性。

主持人:How much are, you know, your biggest customers thinking about the interchangeability of their infrastructure between large scale training and inference?

你的那些最大客戶有多關心他們在大規模訓練和推理之間基礎設施的互換性?

黃仁勳:Well, you know, infrastructure is disaggregated these days. Sam was just telling me that he had decommissioned Volta just recently. They have Pascals, they have Amperes, all different configurations of Blackwell coming. Some of it is optimized for air cool, some of it's optimized liquid cool. Your services are gonna have to take advantage of all of this. The advantage that Nvidia has, of course, is that the infrastructure that you built today for training will just be wonderful for inference tomorrow. And most of ChatGPT, I believe, is inferenced on the same type of systems that it was trained on just recently. And so you can train on it, you can inference on it. And so you're leaving a trail of infrastructure that you know is going to be incredibly good at inference, and you have complete confidence that you can then take that return on it, on the investment that you've had, and put it into a new infrastructure to go scale with. You know you're gonna leave behind something of use, and you know that Nvidia and the rest of the ecosystem are gonna be working on improving the algorithm so that the rest of your infrastructure improves by a factor of five, you know, in just a year. And so that motion will never change.

如今的基礎設施是分解且多代並存的。比如Sam剛告訴我,他們最近才淘汰了Volta。他們有Pascal,有Ampere,還有各種不同配置的Blackwell即將到來。有些設備針對空氣冷卻優化,有些則針對液體冷卻優化。你們的服務需要能夠利用所有這些不同的設備。

英偉達的優勢在於,你今天為訓練搭建的基礎設施,將來會非常適合用於推理。我相信ChatGPT的大部分推理,都是在不久前用來訓練它的同類系統上進行的。所以你可以在這個系統上訓練,也可以在這個系統上進行推理。這樣,你就留下了一條基礎建設的軌跡,你知道這些基礎設施將來會非常適合進行推理,你完全有信心可以把之前投資的回報,投入到新的基礎設施去,擴大規模。你知道你會留下一些有用的東西,而且你知道英偉達和整個生態系統都在努力改進演算法,讓你現有的基礎設施在僅僅一年內就能把效率提高五倍。所以這種趨勢是不會改變的。

And so the way that people will think about the infrastructure is, yeah, even though I built it for training today, it's gotta be great for training, we know it's gonna be great for inference. Inference is gonna be multi scale. I mean, first of all, in order to distill smaller models, you have to have a larger model to distill from, and so you're still gonna create these incredible frontier models. They're gonna be used for, of course, the groundbreaking work. You're gonna use it for synthetic data generation. You're gonna use the models, the big models, to teach smaller models and distill down to smaller models. And so there's a whole bunch of different things you can do, but in the end, you're gonna have giant models all the way down to little tiny models. The little tiny models are gonna be quite effective, you know, not as generalizable, but quite effective. And so, you know, they're gonna perform very specific stunts incredibly well, that one task. And we're gonna see superhuman task in one little tiny domain from a little tiny model. Maybe you know, it's not a small language model, but you know, tiny language model, TLMs, or, you know, whatever. Yeah, so I think we're gonna see all kinds of sizes and we hope, right, it's just kind of like software is today.

人們看待基礎設施的方式在變:雖然我今天蓋的這個設施是為了訓練用的,它當然得很適合訓練,但我們也知道它將來會非常適合做推理。而推理會有很多不同的規模。

我是說,你會有各種不同大小的模型。小模型要蒸餾,就得有大模型可供蒸餾,所以你還是會創造一些前沿的大模型。這些大模型會用來做開創性的工作,用來產生合成數據,用來教小模型,然後把知識蒸餾給小模型。所以你可以做的事情有很多,但最後你會有從巨大的模型一路到非常小的模型。這些小模型將會非常有效,雖然它們沒那麼通用,但在特定任務上會非常有效。它們會在某個特定任務上表現得非常好,我們將會看到在某個小小的領域裡,小模型能完成超乎人類的任務。也許它不是一個小型語言模型,而是微型語言模型,TLM,反正就是類似的東西。所以我覺得我們會看到各種大小的模型,就像現在的軟體一樣。
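“大模型教小模型”通常指的是知識蒸餾(knowledge distillation)。下面用NumPy寫一個最小化的示意:學生模型的訓練目標之一,是讓自己的輸出分佈貼近教師模型的輸出分佈(以KL散度衡量)。教師、學生的logits在這裡是隨機生成的假設資料,僅為說明概念:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
teacher_logits = rng.standard_normal((4, 10))                          # 假設的教師輸出
student_logits = teacher_logits + 0.5 * rng.standard_normal((4, 10))   # 假設的學生輸出

T = 2.0                               # 溫度:放大教師分佈中較小機率所含的資訊
p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)

# 蒸餾損失:教師分佈對學生分佈的KL散度(取批次平均);實際訓練中會與一般交叉熵加權合併
kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1).mean()
print(f"蒸餾(KL)損失: {kl:.4f}")
```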

Yeah, I think in a lot of ways, artificial intelligence allows us to break new ground in how easy it is to create new applications. But everything about computing has largely remained the same. For example, the cost of maintaining software is extremely expensive. And once you build it, you would like it to run on as large of an install base as possible. You would like not to write the same software twice. I mean, you know, a lot of people still feel the same way. You like to take your engineering and move them forward. And so to the extent that, to the extent that the architecture allows you, on one hand, to create software today that runs even better tomorrow with new hardware, that's great; or software that you create tomorrow, AI that you create tomorrow, runs on a large install base. You think that's great. That way of thinking about software is not gonna.

我覺得在很多方面,人工智慧讓我們更容易創造新的應用程式。但是計算的許多基本面並沒有改變。比如說,維護軟體的成本非常高。一旦你建好了軟體,你會希望它能在盡可能大的既有設備基礎上運行,也不想把同樣的軟體寫兩遍。很多人還是這麼想的:你希望把你的工程成果一路往前帶。所以,如果架構允許,你今天寫的軟體,明天在新硬體上能跑得更好,那很棒;或者你明天創造的軟體、明天訓練的AI,能在龐大的既有設備上運行,那也很棒。這種看待軟體的方式是不會改變的。
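“舊程式不改、在新硬體上照樣跑甚至跑得更快”,可以抽象成執行期的能力偵測與分派:上層呼叫保持不變,底層依硬體能力挑選最佳實作,舊硬體則退回基準實作。以下是純Python的概念示意,所謂的“硬體能力等級”與各個函式名稱都是虛構的,並非CUDA的實際機制:

```python
# 極簡的能力分派示意:上層呼叫不變,底層依硬體能力挑選實作(名稱與等級皆為虛構)
KERNELS = {}

def register(min_capability):
    def deco(fn):
        KERNELS[min_capability] = fn
        return fn
    return deco

@register(min_capability=7)          # 假設:舊一代硬體可用的基準實作
def matmul_baseline(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

@register(min_capability=9)          # 假設:新一代硬體上更快的實作(此處僅示意,直接沿用基準)
def matmul_fast(a, b):
    return matmul_baseline(a, b)

def matmul(a, b, hardware_capability):
    best = max(c for c in KERNELS if c <= hardware_capability)   # 挑選不超過硬體能力的最佳版本
    return KERNELS[best](a, b)

# 同一段上層程式碼,在能力7的舊硬體與能力9的新硬體上都能執行
print(matmul([[1, 2]], [[3], [4]], hardware_capability=7))
print(matmul([[1, 2]], [[3], [4]], hardware_capability=9))
```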

主持人:Change. And Nvidia has moved into larger and larger, let's say, like a unit of support for customers. I think about it going from single chip to, you know, server to rack and NVL72. How do you think about that progression? Like what's next? Like should Nvidia do full data centers? But

隨著科技的發展,英偉達提供給客戶的“支援單元”越來越大,從單一晶片,到伺服器,再到整機架的NVL72。你怎麼看待這種發展?接下來會是什麼?比如,英偉達是不是應該做整個資料中心?

黃仁勳:In fact, we build full data centers the way that we build everything. Unless you're building, if you're developing software, you need the computer in its full manifestation. We don't build PowerPoint slides and ship the chips, we build a whole data center. And until you get the whole data center built up, how do you know the software works? Until you get the whole data center built up, how do you know your, you know, your fabric works and all the things that you expected the efficiencies to be, how do you know it's gonna really work at scale? And that's the reason why it's not unusual to see somebody's actual performance be dramatically lower than their peak performance, as shown in PowerPoint slides. Computing is just not what it used to be. You know, I say that the new unit of computing is the data center. That's it to us. So that's what you have to deliver. That's what we build. Now we build a whole thing like that. And then, for every single combination, air cooled, x86, liquid cooled, Grace, Ethernet, InfiniBand, NVLink, no NVLink, you know what I'm saying? We build every single configuration. We have five supercomputers in our company today. Next year, we're gonna build easily five more. So if you're serious about software, you build your own computers; if you're serious about software, then you're gonna build your whole computer. And we build it all at scale.

實際上,我們建造完整的資料中心就像我們建造其他所有東西一樣。如果你在開發軟體,你需要電腦的完整形態來測試。我們不只是做PPT幻燈片然後發貨晶片,我們建造整個資料中心。只有當我們把整個資料中心搭建起來後,你才能知道軟體是否正常運作,你的網路佈線是否有效,所有你期望的效率是否都能達到,你才知道它是否真的能在大規模上運行。這就是為什麼人們的實際表現通常遠低於PPT幻燈片上展示的峰值性能,計算已經不再是過去的樣子了。我說現在的計算單元是資料中心,對我們來說就是這樣。這就是你必須交付的東西,也是我們建造的東西。

我們現在就這樣建造整個系統。然後我們為每一種可能的組合建造:空氣冷卻、x86架構、液體冷卻、Grace晶片、乙太網路、InfiniBand、帶NVLink、不帶NVLink,你懂我的意思嗎?我們建造每一種配置。我們公司現在有五台超級計算機,明年我們輕易就能再建五台。所以,如果你對軟體是認真的,你就會自己建造電腦;如果你對軟體是認真的,你就會建造整台電腦。而且我們都是大規模地建造。

This is the part that is really interesting. We build it at scale and we build it very vertically integrated. We optimize it full stack, and then we disaggregate everything and we sell it in parts. That's the part that is completely, utterly remarkable about what we do. The complexity of that is just insane. And the reason for that is we want to be able to graft our infrastructure into GCP, AWS, Azure, OCI. All of their control planes, security planes are all different, and all of the way they think about their cluster sizing, all different. And, but yet we make it possible for them to all accommodate Nvidia's architecture, so that it could be everywhere. That's really in the end the singular thought, you know, that we would like to have a computing platform that developers could use that's largely consistent, modular, you know, 10% here and there because people's infrastructure are slightly optimized differently, and modular 10% here and there, but everything they build will run everywhere. This is kind of one of the principles of software that should never be given up, and we protect it quite dearly. Yeah, it makes it possible for our software engineers to build once, run everywhere. And that's because we recognize that the investment of software is the most expensive investment, and it's easy to test.

這部分真的很有趣。我們不僅大規模建造,而且是垂直整合建造,從底層到頂層全端優化;然後我們再把一切拆解開來,按部件出售。這正是我們所做的事情裡最不可思議的部分,其複雜度高得驚人。為什麼要這麼做呢?因為我們想把我們的基礎設施嫁接到GCP、AWS、Azure、OCI這些不同的雲端服務商之中。他們的控制平面、安全平面各不相同,他們考慮叢集規模的方式也各不相同,但我們還是設法讓他們都能容納英偉達的架構。這樣,我們的架構就能無所不在。

最終,我們希望有一個運算平台,開發者可以用它來建立軟體,這個平台在大部分情況下是一致的,可以模組化地調整,可能這裡那裡有10%的不同,因為每個人的基礎設施都略有優化差異,但是無論在哪裡, 我們建造的東西都能運行。這是軟體的一個原則,我們非常珍惜這一點。這使得我們的軟體工程師可以建構出到處都能運作的軟體。這是因為我們認識到,軟體的投資是最昂貴的投資,而且它很容易測試。

Look at the size of the whole hardware industry and then look at the size of the world's industries. It's $100 trillion on top of this one trillion dollar industry. And that tells you something. The software that you build, you have to, you know, you basically maintain for as long as you shall live. We've never given up on a piece of software. The reason why CUDA is used is because, you know, I told everybody, we will maintain this for as long as we shall live. And we're serious. Now, we still maintain it. I just saw a review the other day, Nvidia Shield, our Android TV. It's the best Android TV in the world. We shipped it seven years ago. It is still the number one Android TV that people, you know, anybody who enjoys TV. And we just updated the software just this last week and people wrote a new story about it. GeForce, we have 300 million gamers around the world. We've never stranded a single one of them. And so the fact that our architecture is compatible across all of these different areas makes it possible for us to do it. Otherwise, we would have, you know, we would have software teams that are a hundred times the size of our company today if not for this architectural compatibility. So we're very serious about that, and that translates to benefits for the developers.

看看整個硬體產業的規模,再比比全世界所有產業的規模:在這個一兆美元的產業之上,承載著價值一百兆美元的各行各業。這個對比本身就說明了問題。

你們做的軟體,基本上要一直維護下去。我們從來沒有放棄過任何一款軟體。CUDA之所以被大家使用,是因為我向所有人承諾:只要我們還在,就會一直維護它。我們是認真的,現在也還在維護。我前幾天還看到一篇評論,說我們的英偉達Shield,我們的安卓電視,是世界上最好的安卓電視。我們七年前推出的,它仍然是排名第一的安卓電視,任何喜歡看電視的人都愛它。我們上週才更新了軟體,然後就有人寫了新的報導。我們的GeForce,全世界有3億玩家,我們從來沒有拋棄過他們中的任何一個。我們的架構在所有這些不同領域都是相容的,這使得我們能做到這一點;如果不是因為這種架構相容性,我們今天的軟體團隊規模會是現在公司的一百倍。所以我們非常重視這一點,這也為開發者帶來了好處。

主持人:One impressive substantiation of that recently was how quickly you brought up a cluster for xAI. Yeah, and if you want to talk about that, cuz that was striking in terms of both the scale and the speed with what you did. That

最近有一個令人印象深刻的例證,就是你們為xAI迅速搭建起一個集群。能不能聊聊這件事?因為無論規模還是速度都讓人驚訝。

黃仁勳:You know, a lot of that credit you gotta give to Elon. I think the, first of all, to decide to do something, select the site, bring cooling to it, bring power to it, and then decide to build this hundred thousand GPU super cluster, which is, you know, the largest of its kind in one unit. And then working backwards, you know, we started planning together the date that he was gonna stand everything up. And the date that he was gonna stand everything up was determined, you know, quite, you know, a few months ago. And so all of the components, all the OEMs, all the systems, all the software integration we did with their team, all the network simulation, we simulated all the network configurations, I mean like we prestaged everything as a digital twin. We prestaged all of his supply chain. We prestaged all of the wiring of the networking. We even set up a small version of it, kind of a, you know, just a first instance of it. You know, ground truth, a reference, you know, system 0 before everything else showed up. So by the time that everything showed up, everything was staged, all the practicing was done, all the simulations were done.

這裡得給埃隆·馬斯克很多功勞。首先,他決定要做這件事,選了地方,解決了冷卻和供電問題,然後決定要建造這個十萬GPU的超級電腦群,這是迄今為止這種類型中最大的一個。然後,我們開始倒推,就是說,我們幾個月前就一起規劃了他要讓一切運作起來的日期。所以,所有的組件、所有的原始設備製造商、所有的系統、所有的軟體集成,我們都是和他們的團隊一起做的,所有的網路配置我們都模擬了一遍,我們預先準備,就像數位孿生一樣,我們預先準備了所有的供應鏈,所有的網路佈線。我們甚至建造了一個小版本,就像是第一個實例,你懂的,就是所有東西到位之前的基準,你參考的0號系統。所以,當所有東西都到位的時候,一切都已經安排好了,所有的練習都做完了,所有的模擬也都完成了。

And then, you know, the massive integration, even then the massive integration was a monument of, you know, gargantuan teams of humanity crawling over each other, wiring everything up 24/7. And within a few weeks, the clusters were up. I mean, it's, it's really, yeah, it's really a testament to his willpower and how he's able to think through mechanical things, electrical things and overcome what is apparently, you know, extraordinary obstacles. I mean, what was done there is the first time that a computer of that large scale has ever been done at that speed. And also our two teams were working, from a networking team to compute team to software team to training team to, you know, the infrastructure team, the electrical engineers, you know, to the software engineers, all working together. Yeah, it's really quite a feat to watch. Was.

然後,你知道,大規模的整合工作,即使這個整合工作本身也是個巨大的工程,需要大量的團隊成員像螞蟻一樣辛勤工作,幾乎是全天候不停地接線和設置。幾週之內,這些計算機群就建成了。這真的是對他意志力的證明,也顯示了他如何在機械、電氣方面思考,並克服了顯然是非常巨大的障礙。我的意思是,這可是第一次在這麼短的時間內建成如此大規模的電腦系統。這需要我們的網路團隊、計算團隊、軟體團隊、訓練團隊,以及基礎建設團隊,也就是那些電機工程師、軟體工程師,所有人一起合作。這真的挺壯觀的。這就像是一場大型的團隊協作,每個人都在努力確保一切順利運行。

主持人:Was there a challenge that felt most likely to be blocking from an engineering perspective?

從工程角度來看,有沒有哪個挑戰最可能成為絆腳石,就是說,有沒有哪個技術難題最可能讓整個專案卡住,動彈不得?

黃仁勳:A tonnage of electronics that had to come together. I mean, it probably worth just to measure it. I mean, it's a, you know, it tons and tons of equipment. It's just abnormal. You know, usually a supercomputer system like that, you plan it for a couple of years from the moment that the first systems come on, come delivered to the time that you've probably submitted everything for some serious work. Don't be surprised if it's a year, you know, I mean, I think that happens all the time. It's not abnormal. Now we couldn't afford to do that. So we created, you know, a few years ago, there was an initiative in our company that's called Data Center as a product. We don't sell it as a product, but we have to treat it like it's a product. Everything about planning for it and then standing it up, optimizing it, tuning it, keep it operational, right? The goal is that it should be, you know, kind of like opening up your beautiful new iPhone and you open it up and everything just kind of works.

我們需要把大量的電子設備整合在一起。我的意思是,這些設備的量多到值得去稱一稱。有數噸又數噸的設備,這太不正常了。通常像這樣的超級電腦系統,從第一個系統開始交付,到你把所有東西都準備好進行一些嚴肅的工作,你通常需要規劃幾年時間。如果這個過程需要一年,你要知道,這是常有的事,並不奇怪。

但現在我們沒有時間這麼做。所以幾年前,我們公司裡有一個叫做“資料中心即產品”的計劃。我們不把它當作產品來賣,但我們必須像對待產品一樣對待它。從規劃到建立,再到優化、調整、保持運行,所有的一切都是為了確保它能夠像打開一部嶄新的iPhone一樣,一打開,一切都能正常運作。我們的目標就是這樣。

Now, of course, it's a miracle of technology making it that, like that, but we now have the skills to do that. And so if you're interested in a data center and just have to give me a space and some power, some cooling, you know, and we'll help you set it up within, call it, 30 days. I mean, it's pretty extraordinary.

當然了,能這麼快就把資料中心建好,這簡直就是科技的奇蹟。但現在我們已經有了這樣的技術能力。所以如果你想要建造一個資料中心,只需要給我一個地方,提供電力和冷卻,我們就能在差不多30天內幫你把一切都搭建好。我的意思是,這真的非常了不起。

主持人:That's wild. If you think, if you look ahead to 200,000, 500,000, a million GPUs in a super cluster, whatever you call it. At that point, what do you think is the biggest blocker? Capital? Energy? Supply in one area?

那真是厲害。如果你展望未來,一個超級集群裡有二十萬、五十萬、甚至一百萬顆GPU,不管你怎麼稱呼它。到那個時候,你覺得最大的瓶頸會是什麼呢?是資金、能源,還是某個環節的供應?

黃仁勳:Everything. Nothing about what you, just the scales that you talked about, though, nothing is normal.

你說的那些事情,不管是哪個方面,只要涉及到你提到的那些巨大規模,那就沒有一件事情是正常的。

主持人:But nothing is impossible. Nothing.

但是,也沒什麼事是完全不可能的。啥事都有可能。

黃仁勳:Is, yeah, no laws of physics limits, but everything is gonna be hard. And of course, you know, I, is it worth it? Like you can't believe, you know, to get to something that we would recognize as a computer that so easily and so able to do what we ask it to do, what, you know, otherwise general intelligence of some kind and even, you know, even if we could argue about is it really general intelligence, just getting close to it is going to be a miracle. We know that. And so I think the, there are five or six endeavors to try to get there. Right? I think, of course, OpenAI and anthropic and X and, you know, of course, Google and meta and Microsoft and you know, there, this frontier, the next couple of clicks that mountain are just so vital. Who doesn't wanna be the first on that mountain. I think that the prize for reinventing intelligence altogether. Right. It's just, it's too consequential not to attempt it. And so I think there are no laws of physics. Everything is gonna be hard.

確實,沒有物理定律說我們做不到,但每件事情都會很難。你也知道,這值得嗎?你可能覺得難以置信,我們要達到的那種電腦,能夠輕鬆地做我們讓它做的事情,也就是某種通用智能,就算我們能爭論它是否真的是通用智能,接近它都會是個奇蹟。我們知道這很難。所以我認為,有五、六個團隊正在嘗試達到這個目標。對吧?比如說,OpenAI、Anthropic、X,還有谷歌、Meta和微軟等等,他們都在努力攀登這個前沿科技的山峰。誰不想成為第一個登頂的人呢?我認為,重新發明智能的獎勵是如此之大,它的影響太大了,我們不能不去嘗試。所以,雖然物理定律上沒有限制,但每件事都會很難。

主持人:A year ago when we spoke together, you talked about, we asked like what applications you got most excited about that Nvidia would serve next in AI and otherwise, and you talked about how you led to, your most extreme customers sort of lead you there. Yeah, and about some of the scientific applications. So I think that's become like much more mainstream of you over the last year. Is it still like science and AI's application of science that most excites you?

一年前我們聊天時,我問你,你對英偉達接下來在AI和其他領域能服務的哪些應用最興奮,你談到了你的一些最極端的客戶某種程度上引導了你。是的,還有關於一些科學應用的討論。所以我覺得過去一年裡,這些科學和AI的應用變得更主流了。現在,是不是仍然是科學以及AI在科學領域的應用讓你最興奮?

黃仁勳:I love the fact that we have digital, we have AI chip designers here at Nvidia. Yeah, I love that. We have AI software engineers. How.

我很喜歡的一點是,我們英偉達內部已經有了數位的、用AI做晶片設計的設計師。是的,我很喜歡這一點。我們還有AI軟體工程師。

主持人:Effective our AI chip designers today? Super.

我們今天用人工智慧來設計晶片的效果怎麼樣?非常好。

黃仁勳:Good. We can't, we couldn't build Hopper without it. And the reason for that is because they could explore a much larger space than we can and because they have infinite time. They're running on a supercomputer. We have so little time using human engineers that we don't explore as much of the space as we should, and we also can't explore combinatorially. I can't explore my space while including your exploration and your exploration. And so, you know, our chips are so large, it's not like it's designed as one chip. It's designed almost like 1,000 chips, and we have to, we have to optimize each one of them kind of in isolation. You really wanna optimize a lot of them together, and, you know, cross-module co-design and optimize across a much larger space. But obviously we're gonna be able to find, you know, local maximums that are hidden behind local minimums somewhere. And so clearly we can find better answers. You can't do that without AI. Engineers just simply can't do it. We just don't have enough time.

我們的AI晶片設計師真的很厲害。如果沒有它們,我們根本造不出Hopper這款晶片。因為它們所能探索的範圍比我們人類廣得多,而且它們好像有無窮無盡的時間。它們在超級電腦上運行,而我們人類工程師的時間有限,探索不了那麼大的範圍。而且,我們也不能同時探索所有的可能,我探索我的領域的時候,就不能同時探索你的領域。

我們的晶片非常大,與其說是設計一顆晶片,不如說像是在設計1000顆晶片,而且每一顆幾乎都得各自獨立地優化。但你真正想要的是把許多模組放在一起優化、跨模組協同設計,在大得多的設計空間中尋找更優解。顯然,這樣我們就能找到那些躲在局部低谷後面的更好的局部最優點,找到更好的答案。沒有AI,我們做不到這一點,工程師們單純是時間不夠。
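黃仁勳描述的,本質上是一個巨大的組合優化問題:單點逐步改進容易卡在局部最優,而讓機器平行地從大量起點同時探索,更容易翻過局部低谷。下面用一個玩具化的隨機爬山範例示意這個直覺;目標函數與參數純屬虛構,與實際的晶片設計流程無關:

```python
import math
import random

random.seed(0)

def quality(x):
    # 虛構的設計品質函數:有許多局部高點,全域最佳藏在 x≈0.8 附近的窄峰裡
    return 0.3 * math.sin(12 * x) + math.exp(-(x - 0.85) ** 2 / 0.02)

def hill_climb(start, steps=200, step_size=0.01):
    x = start
    for _ in range(steps):
        cand = min(1.0, max(0.0, x + random.uniform(-step_size, step_size)))
        if quality(cand) > quality(x):      # 只接受變好的小步,典型的局部搜尋
            x = cand
    return x

single = hill_climb(0.1)                    # 單一起點:通常停在左側的局部高點
parallel = max((hill_climb(random.random()) for _ in range(64)), key=quality)

print(f"單點爬山      x={single:.3f}  品質={quality(single):.3f}")
print(f"64點平行探索  x={parallel:.3f}  品質={quality(parallel):.3f}")
```

真實的晶片設計空間是離散、超高維且模組間相互耦合的,遠比這個一維例子複雜,但“用海量算力平行探索更大的空間”這個思路是一致的。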

主持人:One other thing has changed since we last spoke collectively, and I looked it up, at the time Nvidia's market cap was about 500 billion. It's now over 3 trillion. So the last 18 months, you've added two and a half trillion plus of market cap, which effectively is $100 billion plus a month, or two and a half Snowflakes, or, you know, a Stripe plus a little bit, or however you wanna think about it. A country or two. Obviously, a lot of things have stayed consistent in terms of focus on what you're building and etc. And you know, walking through here earlier today, I felt the buzz like when I was at Google 15 years ago, you felt the energy of the company and the vibe of excitement. What has changed during that period, if anything? Or how, what is different in terms of either how Nvidia functions or how you think about the world or the size of bets you can take or.

自從我們上次一起聊天以來,有一件事變了。我查了一下,當時英偉達的市值大概是5000億美元,現在已超過3兆美元。也就是說,在過去18個月裡,你們增加了超過兩兆五千億美元的市值,相當於每個月增加1000多億美元,或者說相當於兩個半Snowflake、或一個Stripe再多一點,隨你怎麼換算。

這相當於增加了一、兩個國家的市值。顯然,儘管市值增長了這麼多,你們在建造的東西和專注的領域上還是保持了一致性。你知道,今天我在這裡走了一圈,我感受到了一種活力,就像15年前我在谷歌時感受到的那樣,你能感覺到公司的能量和興奮的氛圍。在這段時間裡,有什麼變化了嗎?或者,英偉達的運作方式、你對世界的看法、你能承擔的風險大小等方面有什麼不同了嗎?

黃仁勳:Well, our company can't change as fast as a stock price. Let's just be clear about. So in a lot of ways, we haven't changed that much. I think the thing to do is to take a step back and ask ourselves, what are we doing? I think that's really the big, you know, the big observation, realization, awakening for companies and countries is what's actually happening. I think what we're talking about earlier, I'm from our industry perspective, we reinvented computing. Now it hasn't been reinvented for 60 years. That's how big of a deal it is that we've driven down the marginal cost of computing, down probably by a million x in the last 10 years to the point that we just, hey, let's just let the computer go exhaustively write the software. That's the big realization. And that in a lot of ways, I was kind of, we were kind of saying the same thing about chip design. We would love for the computer to go discover something about our chips that we otherwise could have done ourselves, explore our chips and optimize it in a way that we couldn't do ourselves, right, in the way that we would love for digital biology or, you know, any other field of science.

我們公司的變化速度可沒有股價變動那麼快。所以這麼說吧,我們在很多方面並沒有太大變化。我認為重要的是要退一步來問我們自己,我們到底在做什麼。這真的是對公司和國家來說一個很大的觀察、認識和覺醒,那才是真正發生的事情。

就像我們之前討論的,從我們行業的角度來看,我們重新發明了計算,這可是60年來都沒有發生過的事情。我們把計算的邊際成本大幅壓低,在過去10年裡大概降低了一百萬倍,以至於我們現在可以乾脆讓電腦去窮盡式地寫軟體。這是一個重大的領悟。

在很多方面,我們對晶片設計也是這麼說的。我們希望電腦能自己去發現我們晶片的一些東西,這些東西我們本來可以自己做,但計算機可以探索我們的晶片並以我們自己做不到的方式進行優化,就像我們希望在數位生物學或其他科學領域中那樣。

And so I think people are starting to realize when we reinvented computing, but what does that mean even, and as we, all of a sudden, we created this thing called intelligence and what happened to computing? Well, we went from data centers are multi tenant stores of files. These new data centers we're creating are not data centers. They don't, they're not multi tenant. They tend to be single tenant. They're not storing any of our files. They're just, they're producing something. They're producing tokens. And these tokens are reconstituted into what appears to be intelligence. Isn't that right? And intelligence of all different kinds. You know, it could be articulation of robotic motion. It could be sequences of amino acids. It could be, you know, chemical chains. It could be all kinds of interesting things, right? So what are we really doing? We've created a new instrument, a new machinery that in a lot of ways is that the noun of the adjective generative AI. You know, instead of generative AI, you know, it's, it's an AI factory. It's a factory that generates AI. And we're doing that at extremely large scale. And what people are starting to realize is, you know, maybe this is a new industry. It generates tokens, it generates numbers, but these numbers constitute in a way that is fairly valuable and what industry would benefit from it.

所以我覺得人們開始意識到,當我們重新發明計算時,這到底意味著什麼。突然間,我們創造了這個叫做智慧的東西,計算發生了什麼變化?嗯,我們以前把資料中心看作是多租戶儲存檔案的地方。我們現在創建的這些新資料中心,其實已經不是傳統意義上的資料中心了。它們往往是單一租戶的,它們不會儲存我們的文件,它們只是在生產一些東西。它們正在生產數據令牌。然後這些數據令牌重新組合成看起來像智慧的東西。對吧?而且智能有各種各樣的形式。可能是機器人動作的表達,可能是胺基酸序列,可能是化學物質鏈,可能是各種有趣的事情,對吧?所以我們到底在做什麼?我們創造了一種新的工具,一種新的機械,從很多方面來說,它就是生成性人工智慧的名詞形式。你知道,不是生成性人工智慧,而是人工智慧工廠。它是一個生產人工智慧的工廠。我們正在非常大規模地做這件事。人們開始意識到,這可能是新行業。它產生數據令牌,它產生數字,但這些數字以一種相當有價值的方式構成,哪些行業會從中受益。

Then you take a step back and you ask yourself again, you know, what's going on? Nvidia on the one hand, we reinvented computing as we know it. And so there's $1 trillion of infrastructure that needs to be modernized. That's just one layer of it. The big layer of it is that this instrument that we're building is not just for data centers, which we were modernizing, but you're using it for producing some new commodity. And how big can this new commodity industry be? Hard to say, but it's probably worth trillions. And so that, I think, is kind of the way to view it, to take a step back. You know, we don't build computers anymore. We build factories. And every country is gonna need it, every company's gonna need it. You know, give me an example of a company or industry that says, you know what, we don't need to produce intelligence. We got plenty of it. And so that's the big idea. I think, you know, and that's kind of an abstracted industrial view. And, you know, someday people realize that in a lot of ways, the semiconductor industry wasn't about building chips, it was about building the foundational fabric for society. And then all of a sudden, there we go. I get it. You know, this is a big deal. It's not just about chips.

然後你退一步,再問自己:到底發生了什麼事?一方面,英偉達重新發明了我們所知的計算,所以有價值一兆美元的基礎設施需要現代化,但這只是其中一層。更大的一層是:我們正在建造的這個工具不只是用來現代化資料中心的,而是要用它來生產一種新的商品。這個新商品產業能有多大?很難說,但可能價值數兆美元。

所以我認為值得退一步來看。你知道,我們不再只是製造電腦了,我們在製造工廠。每個國家都會需要它,每家公司都會需要它。你很難舉出哪家公司或哪個產業會說:“我們不需要生產智慧,我們已經有很多了。”這就是那個大的想法,一種比較抽象的產業視角。總有一天人們會意識到,在很多方面,半導體產業從來不只是製造晶片,而是在為社會打造基礎結構。然後突然間,大家就明白了:這是一件大事,不僅僅是晶片的事。

主持人:How do you think about embodiment now?

你現在怎麼看待“體現”或者“具體化”這個概念?就是說,你怎麼考慮把智慧或人工智慧真正應用到實際的物理世界中,例如機器人或其他實體設備上?

黃仁勳:Well, the thing I'm super excited about is in a lot of ways, we've, we're close to artificial general intelligence, but we're also close to artificial general robotics. Tokens are tokens. I mean, the question is, can you tokenize it? You know, of course, tokenizing things is not easy, as you guys know. But if you're able to tokenize things, align it with large language models and other modalities, if I can generate a video that has Jensen reaching out to pick up the coffee cup, why can't I prompt a robot to generate the tokens to actually pick up the cup, you know? And so intuitively, you would think that the problem statement is rather similar for the computer. And, and so I think that we're that close. That's incredibly exciting.

我現在非常興奮的一點是,我們在很多方面已經接近通用人工智慧,同時也接近通用機器人技術。token就是token。我的意思是,問題在於你能不能把它token化。當然,把東西token化並不容易,你們知道這一點。但如果你能做到,把它和大型語言模型以及其他模態對齊,那麼,如果我能生成一段黃仁勳伸手去拿咖啡杯的影片,為什麼我不能提示一個機器人生成token,實際上把咖啡杯拿起來呢?所以直觀上,你會覺得這兩個問題對電腦來說其實相當相似。所以我認為我們已經很接近了,這非常令人興奮。
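“把動作token化”最簡單的一種做法,是把連續的控制量離散成有限個區間,每個區間對應一個token,讓機器人的動作和文字共用同一套離散詞彙。下面是一個純示意的例子,區間數、動作維度和數值都是隨意假設的:

```python
import numpy as np

N_BINS = 256               # 假設:每個連續維度離散成256個token
LOW, HIGH = -1.0, 1.0      # 假設:動作值已正規化到[-1, 1]

def action_to_tokens(action):
    """把一個連續動作向量(例如7個關節的目標位置)轉成離散token序列。"""
    a = np.clip(np.asarray(action, dtype=float), LOW, HIGH)
    return np.round((a - LOW) / (HIGH - LOW) * (N_BINS - 1)).astype(int)

def tokens_to_action(tokens):
    """反向操作:由token還原出(近似的)連續動作。"""
    return np.asarray(tokens) / (N_BINS - 1) * (HIGH - LOW) + LOW

action = [0.12, -0.8, 0.33, 0.0, 0.95, -0.41, 0.5]     # 假想的7維動作
tokens = action_to_tokens(action)
print("tokens:", tokens.tolist())
print("還原後:", np.round(tokens_to_action(tokens), 3).tolist())
```

一旦動作變成token序列,“預測下一個動作”在形式上就和“預測下一個字”一樣,這也是黃仁勳說兩個問題“對電腦來說相當相似”的直觀理由。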

Now the two brownfield robotic systems, and brownfield means that you don't have to change the environment for them, are self driving cars and humanoid robots. And with digital chauffeurs and humanoid robots, right, between the cars and the humanoid robot, we could literally bring robotics to the world without changing the world, because we built a world for those two things. Probably not a coincidence that Elon chose those two forms of robotics, because it is likely to have the larger potential scale. And so I think that's exciting. But the digital version of it is equally exciting. You know, we're talking about digital or AI employees. There's no question we're gonna have AI employees of all kinds, and our outlook will be some biologics and some artificial intelligence, and we will prompt them in the same way. Isn't that right? Mostly I prompt my employees, right? You know, provide them context, ask them to perform a mission. They go and recruit other team members, they come back and work going back and forth. How's that gonna be any different with digital and AI employees of all kinds? So we're gonna have AI marketing people, AI chip designers, AI supply chain people, AIs, you know, and I'm hoping that Nvidia is someday biologically bigger, but also from an artificial intelligence perspective, much bigger. That's our future company. If.

現在有兩種“棕地”(brownfield)機器人系統,“棕地”的意思是你不需要為它們改變環境:一種是自動駕駛汽車,另一種是人形機器人。有了數位司機和人形機器人,我們可以在不改變世界的情況下把機器人技術帶到世界上,因為這個世界本來就是為汽車和人形而建的。馬斯克選擇這兩種形式,大概不是巧合,因為它們可能擁有最大的潛在規模。所以我覺得這很令人興奮。而它的數位版本同樣令人興奮。你知道,我們談論的是數位員工或AI員工。毫無疑問,我們將擁有各種AI員工;未來我們的員工隊伍會是一部分人類加上一部分人工智慧,而我們會用同樣的方式去提示他們。不是嗎?大多數情況下,我就是在提示我的員工:給他們提供上下文,請他們去完成一項任務;他們去找其他團隊成員協作,再回來匯報,來來回回。這和各種數位員工、AI員工會有什麼不同呢?所以我們將有AI行銷人員、AI晶片設計師、AI供應鏈人員等等。我希望英偉達有一天在“生物學”意義上變得更大,同時從人工智慧的角度來看,規模更是大得多。這就是我們未來的公司。

主持人:If we came back and talked to you a year from now, what part of the company do you think would be most artificially intelligent?

如果我們一年後回來再和你聊,你覺得公司裡哪個部分會是“人工智慧化”程度最高的?

黃仁勳:I'm hoping it's chip design.

我希望是晶片設計這一塊。

主持人:Okay. And most.

好的,然後繼續詢問。

黃仁勳:Important part. And the read. That's right. Because it, because I should start where it moves the needle most, also where we can make the biggest impact most. You know, it's such an insanely hard problem. I work with Sassine at Synopsys and Anirudh at Cadence. I totally imagine them having Synopsys chip designers that I can rent. And they know something about a particular module, their tool, and they train an AI to be incredibly good at it. And we'll just hire a whole bunch of them whenever we need, when we're in that phase of the chip design. You know, I might rent a million Synopsys engineers to come and help me out and then go rent a million Cadence engineers to help me out. And that, what an exciting future for them, that they have all these agents that sit on top of their tools platform, that use the tools platform and collaborate with other platforms. And you'll do that for, you know, Christian will do that at SAP and Bill will do that at ServiceNow.

我希望是最重要、最能產生影響的部分,也就是最能推動公司往前走的地方。這個問題非常難。我和Synopsys的Sassine、Cadence的Anirudh合作,我完全可以想像他們會提供可以出租的“Synopsys晶片設計師”AI:這些AI對某個特定模組、某個工具非常了解,而且被訓練得非常擅長這方面的工作。當我們進行到晶片設計的某個階段時,我們就租用一大批這樣的AI設計師,比如租一百萬個Synopsys工程師AI來幫忙,再租一百萬個Cadence工程師AI來幫忙。對他們來說,這是多麼令人興奮的未來:有一大批智慧代理坐在他們的工具平台之上,使用這些工具,並與其他平台協作。SAP的Christian會這樣做,ServiceNow的Bill也會這樣做。

Now, you know, people say that these SaaS platforms are gonna be disrupted. I actually think the opposite, that they're sitting on a gold mine, that there's gonna be this flourishing of agents that are gonna be specialized in Salesforce, specialized in, you know, well, Salesforce, I think they call it Lightning, and SAP, and everybody's got their own language. Is that right? And we got CUDA and we've got OpenUSD for Omniverse. And who's gonna create an AI agent that's awesome at OpenUSD? We are, you know, because nobody cares about it more than we do, right? And so I think in a lot of ways, these platforms are gonna be flourishing with agents and we're gonna introduce them to each other and they're gonna collaborate and solve problems.

現在,有些人說這些SaaS平台將會被顛覆。但我其實認為恰恰相反:他們就像坐在金礦上,接下來會出現一大批專業化智慧代理的繁榮。這些智慧代理會專門針對各個平台進行優化,比如Salesforce(我記得他們的平台叫Lightning)、SAP等等,每個平台都有自己的語言和特點。我們有CUDA,還有為Omniverse準備的OpenUSD。誰會來打造一個精通OpenUSD的AI代理?當然是我們,因為沒有人比我們更在乎它,對吧?所以我認為在很多方面,這些平台將會因為這些智慧代理而繁榮起來,我們會把它們互相介紹,它們會彼此協作、共同解決問題。
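“坐在工具平台之上、彼此被介紹認識並協作的代理”,可以抽象成一個很簡單的介面:每個代理包裝一個平台的工具,對外只暴露統一的 ask() 方法,協調者再把任務在代理之間傳遞。以下是純Python的概念示意,所有平台名稱與回覆內容都是虛構的,並非任何實際產品的API:

```python
from dataclasses import dataclass

@dataclass
class ToolAgent:
    name: str
    platform: str            # 該代理所依附的工具平台(名稱為虛構)

    def ask(self, task: str) -> str:
        # 實務上這裡會呼叫平台工具與底層模型;此處僅回傳示意文字
        return f"[{self.name}@{self.platform}] 已處理:{task}"

def collaborate(task: str, agents: list) -> list:
    """把同一個任務依序交給多個專門代理,收集各自的結果。"""
    return [agent.ask(task) for agent in agents]

team = [
    ToolAgent("版圖助理", "EDA平台A"),      # 虛構的晶片設計代理
    ToolAgent("時序助理", "EDA平台B"),
    ToolAgent("採購助理", "ERP平台C"),      # 虛構的供應鏈代理
]
for line in collaborate("評估新模組的面積與交期", team):
    print(line)
```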

主持人:You see a wealth of different people working in every domain in AI. What do you think is under noticed, or what do you wish more entrepreneurs or engineers or business people would work on?

你覺得在人工智慧領域,有沒有什麼被忽略的地方,或者你希望更多的創業者、工程師或商業人士能關注和投入工作的領域?

黃仁勳:Well, first of all, I think what is misunderstood, and maybe underestimated, is the under-the-water activity, the under-the-surface activity of groundbreaking science, computer science to science and engineering, that is being affected by AI and machine learning. I think you just can't walk into a science department anywhere, a theoretical math department anywhere, where AI and machine learning and the type of work that we're talking about today isn't gonna transform tomorrow. If you take all of the engineers in the world, all of the scientists in the world, and you say that the way they're working today is an early indication of the future, because obviously it is, then you're gonna see a tidal wave of generative AI, a tidal wave of AI, a tidal wave of machine learning change everything that we do in some short period of time.

首先,我認為可能被誤解、或者說被低估的,是水面之下正在發生的事:突破性的科學、計算機科學以及各領域的科學與工程,正在被AI和機器學習深刻影響。你走進任何一個科學系、任何一個理論數學系,都會發現我們今天談論的AI和機器學習正要改變它們的明天。如果把全世界所有工程師、所有科學家今天的工作方式,看作未來的早期跡象(顯然就是如此),那麼你將看到一股生成式AI的浪潮、一股AI的浪潮、一股機器學習的浪潮,在很短的時間內改變我們所做的一切。

And I got to work with Alex and Ilya and Hinton in Toronto, and Yann LeCun, and of course Andrew Ng here at Stanford. And, you know, I saw the early indications of it and we were fortunate to have extrapolated from what was observed to be detecting cats into a profound change in computer science and computing altogether. And that extrapolation was fortunate for us. And now, of course, we were so excited by it, so inspired by it, that we changed everything about how we did things. But that took how long? It took literally six years from observing that toy, AlexNet, which I think by today's standards would be considered a toy, to superhuman levels of capabilities in object recognition. Well, that was only a few years. Now what is happening right now, the groundswell in all of the fields of science, not one field of science left behind. I mean, just to be very clear. Okay, everything from quantum computing to quantum chemistry, you know, every field of science is involved in the approaches that we're talking about. If we give ourselves, and they've been at it for a couple to three years, if we give ourselves a couple, two, three years, the world's gonna change. There's not gonna be one paper, there's not gonna be one breakthrough in science, one breakthrough in engineering, where generative AI isn't at the foundation of it. I'm fairly certain of it. And so I think, you know, there's a lot of questions about, you know, every so often I hear about whether this is a fad. You just gotta go back to first principles and observe what is actually happening.

就在很短的時間內,我們看到了所有科學領域的大浪潮,沒有一個科學領域被落下。我想說得非常清楚:從量子計算到量子化學,每一個科學領域都在採用我們正在討論的這些方法。如果再給我們兩三年時間,世界將會改變:到那時,不會有哪一篇論文、哪一項科學突破、哪一項工程突破,不是以生成式AI為基礎的。我對此相當確定。所以,時不時我會聽到有人問,這是否只是計算領域的一時風潮;你只需要回到基本原理,觀察實際正在發生的事情。

人工智慧和機器學習的發展非常快,而且影響深遠。黃仁勳回顧了自己與多位人工智慧先驅合作的經歷,例如多倫多的Alex Krizhevsky、Ilya Sutskever、Geoffrey Hinton,以及Yann LeCun,當然還有史丹佛的Andrew Ng。從辨識貓咪這樣的簡單任務,發展到超越人類水準的物體辨識能力,這個過程只花了幾年時間。他相信,在未來幾年內,每個科學與工程領域的突破都將以生成式AI為基礎;他也鼓勵人們不要糾結這是否只是一時的流行,而應該觀察實際發生的事情,基於事實來判斷。

The computing stack, the way we do computing has changed if the way you write software has changed, I mean, that is pretty cool. Software is how humans encode knowledge. This is how we encode our, you know, our algorithms. We encode it in a very different way. Now that's gonna affect everything, nothing else, whatever, be the same. And so I, I think the, the, I think I'm talking to the converted here and we all see the same thing. And all the startups that, you know, you guys work with and the scientists I work with and the engineers I work with, nothing will be left behind. I mean, this, we're gonna take everybody with us again.

計算的整個體系,也就是我們進行計算的方式,已經改變了,連我們寫軟體的方式也改變了,這非常酷。軟體是人類編碼知識、編碼演算法的方式,而我們現在用一種完全不同的方式來編碼,這將影響一切,沒有什麼會和以前一樣。我想,我在這裡算是在對已經認同這一點的人說話,我們都看到了同樣的趨勢。無論是你們合作的新創公司,還是我合作的科學家和工程師,沒有人會被落下,我們會帶著所有人一起前進。

主持人:I think one of the most exciting things coming from like the computer science world and looking at all these other fields of science is like I can go to a robotics conference now. Yeah, material science conference. Oh yeah, biotech conference. And like, I'm like, oh, I understand this, you know, not at every level of the science, but in the driving of discovery, it is all the algorithms that are.

從計算機科學的角度看其他科學領域,最令人興奮的一點是:我現在可以去參加機器人會議、材料科學會議、生物技術會議,然後發現自己能聽得懂。雖然不是科學的每個層面都懂,但在推動發現的那個環節,用的都是同樣的演算法。

黃仁勳:General and there's some universal unifying concepts.

對,有一些普遍統一的概念。

主持人:And I think that's like incredibly exciting when you see how effective it is in every domain.

我認為這非常令人興奮,當你看到演算法在每個領域都如此有效時。

黃仁勳:Yep, absolutely. And eh, I'm so excited that I'm using it myself every day. You know, I don't know about you guys, but it's my tutor now. I mean, I, I, I don't do, I don't learn anything without first going to an AI. You know? Why? Learn the hard way. Just go directly to an AI. I should go directly to ChatGPT. Or, you know, sometimes I do perplexity just depending on just the formulation of my questions. And I just start learning from there. And then you can always fork off and go deeper if you like. But holy cow, it's just incredible.

我絕對同意。我自己每天都在用AI,非常興奮。不知道你們怎麼樣,但AI已經成了我的導師。我現在學任何東西,都會先去問AI。為什麼要用困難的方式學呢?直接去問AI就好。我會直接問ChatGPT,或者根據問題的形式,有時去問Perplexity,從那裡開始學習;想深入的話,隨時可以再往下鑽研。天哪,這真是太不可思議了。

And almost everything I know, I check, I double check, even though I know it to be a fact, you know, what I consider to be ground truth. I'm the expert. I'll still go to AI and check, make double check. Yeah, so great. Almost everything I do, I involve it.

我現在幾乎做任何事都會用到AI。哪怕是我已經確定的事實、我自認是專家的領域,我也會再用AI核對一遍、反覆確認。真的很棒,我幾乎所有的事情都會讓AI參與。

主持人:I think it's a great note to stop on. Yeah, thanks so much for the time today.

我覺得這是個很好的結束點。非常感謝你今天抽出時間。

黃仁勳:Really enjoyed it. Nice to see you guys.

聊得很開心,很高興見到你們。

本文源自《華爾街見聞》,作者房家瑤;FOREXBNB編輯:文文。