Showing posts with label TED Talk. Show all posts

TED2023.5.9--- Imran Chaudhri: Screenless Wearable Technology




Chat YouTube:
Imran Chaudhri discusses the evolution of technology and the future of artificial intelligence. As a former designer at Apple, he acknowledges the impact that devices such as the iPhone and Apple Watch have had on our lives. However, he believes the future lies in devices built from the ground up for AI and fully standalone, rather than in simply moving screens closer to our faces. He unveils a new kind of wearable device and platform that is screenless, seamless, and sensing, letting people access the power of compute while remaining present in their surroundings. Chaudhri believes this technology will unlock a world of possibility for all of us.


Q: What are the limitations of current wearable technology, and why do they need to be addressed?
Chat YouTube:
The limitations of current wearable technology include the fact that devices such as smartwatches and AR/VR glasses still rely on screens, which act as a barrier between the user and the world. In addition, these devices usually require a companion device, such as a smartphone, and are not fully standalone. There are also limits on the amount of sensory data these devices can collect and interpret. Addressing these limitations is necessary to create a more seamless, sensing form of wearable technology that can improve people's daily lives without pulling them out of their surroundings.


Q: What role will artificial intelligence play in the future, and how will it change the way we interact with technology?
Chat YouTube:
According to the transcript, artificial intelligence (AI) is expected to be the driving force behind the next leap in device design. As the technology advances, AI could enable devices to collect and interpret large amounts of sensory data from the world around us. This would allow more seamless and natural interaction with technology, since people would no longer have to rely on screens or keyboards to communicate with their devices. Instead, devices could understand and respond to voice commands, gestures, and even emotional cues. AI could also personalize and improve the user experience by learning a user's habits and preferences. Overall, AI has the potential to make technology more intuitive and more integrated into our daily lives.


Q: How will the new wearable device and platform improve people's daily lives, and what technology will it use?
Chat YouTube:
The new wearable device and platform is built entirely from the ground up for artificial intelligence (AI) and is completely standalone; you don't need a smartphone or any other device to pair with it. The speaker describes the experience as "screenless, seamless and sensing," allowing people to access the power of compute while remaining present in their surroundings.



Q: Screenless technology and smart home devices

Chat YouTube: A bracelet-like device of this kind could connect to other smart home devices, such as smart speakers, smart lights, and smart locks, controlling the home through voice interaction or gestures for a smarter, more convenient, and more humane home experience. Screenless technology like this could also remove the current need to control smart home devices through a phone or tablet, letting people interact with their homes more naturally.
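The voice-and-gesture control flow described above can be pictured as a tiny intent router. Everything here, the device names, the command table, and the keyword matching, is a hypothetical stand-in for a real speech-recognition pipeline and real device APIs:

```python
# Minimal sketch: route a recognized voice utterance to a smart-home
# command. Device names and keyword matching are invented placeholders
# for a real speech-recognition pipeline and real device APIs.

COMMANDS = {
    ("light", "on"):  lambda: "smart_light.power(True)",
    ("light", "off"): lambda: "smart_light.power(False)",
    ("lock", "lock"): lambda: "smart_lock.engage()",
}

def route_intent(utterance: str) -> str:
    """Map a plain-language utterance to a device action string."""
    text = utterance.lower()
    device = "light" if "light" in text else "lock" if "lock" in text else None
    action = ("on" if " on" in text else
              "off" if "off" in text else
              "lock" if "lock" in text else None)
    handler = COMMANDS.get((device, action))
    return handler() if handler else "no matching device command"

print(route_intent("Turn the light on"))   # smart_light.power(True)
print(route_intent("lock the door"))       # smart_lock.engage()
```

A real device would of course replace the returned strings with actual API calls; the point is only that no screen is needed between the utterance and the action.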


Q: Applying screenless technology to wearables
Chat YouTube:
A bracelet-like device of this kind could integrate various sensors and processors to collect and process physiological and environmental data and provide corresponding services and applications. For example, it could monitor health data such as sleep, exercise, and heart rate and offer guidance and recommendations; it could control other smart devices, such as smartphones, smart lights, and smart appliances, through voice interaction or gestures; and because it has no screen, it would not compete for the user's visual attention or weaken their focus, fitting more naturally into daily life. Screenless wearables therefore have broad application potential, helping people access and process information more conveniently and naturally while providing a better user experience.
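As a rough illustration of the health-monitoring side, here is a toy summary over heart-rate samples. The 100 bpm threshold and the output fields are made up for illustration, not taken from the talk:

```python
# Toy sketch: summarize heart-rate samples from a screenless wearable.
# The 100 bpm "resting" threshold and the output fields are illustrative.

def summarize_heart_rate(samples_bpm, resting_max=100):
    avg = sum(samples_bpm) / len(samples_bpm)
    status = "elevated" if avg > resting_max else "normal"
    return {"average": round(avg, 1), "peak": max(samples_bpm), "status": status}

print(summarize_heart_rate([62, 64, 61, 118, 63]))
# {'average': 73.6, 'peak': 118, 'status': 'normal'}
```

A real device would stream such summaries to a voice interface ("your resting heart rate looks normal today") rather than to a screen.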


0:04 I spent 22 incredible years at Apple, 0:09 helping to design experiences and devices 0:13 ranging from the Mac 0:15 to the iPhone to the Apple Watch. 0:18 And as the power of compute increased, 0:21 the size of our computers or our devices decreased. 0:27 The desktop paved the way for extraordinary interconnectedness, 0:32 but it was stuck to your desk. 0:35 The laptop provided portability, 0:39 but you still had to be sitting down to use it. 0:43 And the smartphone evolved us into the modern, connected humans we are, 0:49 providing millions the ability to access the internet from our pockets. 0:55 And the smart watch was a window to that phone. 0:59 A companion device with a whole host of health insights, 1:05 all shrunk down to your wrist. 1:07 But what comes next? 1:11 Some believe AR/VR glasses like these are the answer, 1:16 but they merely move the screens we already have in our lives today 1:20 to being just millimeters away from our eyeballs. 1:24 A further barrier between you and the world. 1:29 And the future is not on your face. 1:33 In fact, in 2017, 1:35 the legendary tech journalist Walt Mossberg wrote in his final column 1:39 that he felt that soon, one day, technology would become invisible. 1:46 And that the computer would disappear. 1:50 And we agree. 1:51 (Ringing) 1:54 Sorry. 1:55 This is my wife. 1:58 I'm going to have to get this. 1:59 Hello? 2:01 Bethany Bongiorno: Hey, babe. 2:02 IC: Hey, Bethany. 2:03 How's it going? 2:05 BB: Good. Are you at TED? 2:07 IC: Yeah, I'm on the red circle right now, actually. 2:10 BB: Oh, great, good luck. 2:11 And don't forget to mention me. 2:13 (Laughter) 2:15 IC: I won't, babe, thank you. 2:18 BB: Love you. 2:19 IC: Love you, too. Bye. 2:22 It's going to get different in a minute. 2:24 (Applause) 2:27 So my wife, Bethany, and our entire company, Humane, 2:31 have been working to answer the question of what comes next. 2:36 And you may ask yourself, why? 2:39 Why would anybody do this? 
2:42 It's because we love building technology 2:44 that genuinely makes people's lives better. 2:48 And we believe that artificial intelligence or AI 2:53 would be the driving force behind the next leap in device design. 2:59 And there is an incredible amount of stuff that's happening in this space. 3:04 Huge, huge advancements. 3:07 And even Bill Gates has said of OpenAI's GPT 3:12 that it's only the second most revolutionary technology demonstration 3:17 that he's seen in his entire lifetime. 3:20 But what do we do with all these incredible developments, 3:23 and how do we actually harness these to genuinely make our life better? 3:29 If we get this right, 3:31 AI will unlock a world of possibility for all of us. 3:36 And today I want to share with you 3:39 what we think is a solution to that end. 3:44 And it's the first time we're doing so openly. 3:47 It's a new kind of wearable device 3:51 and platform that's built entirely from the ground up 3:55 for artificial intelligence. 3:57 And is completely standalone. 3:59 You don't need a smartphone or any other device to pair with it. 4:05 In fact, I'm wearing one right now. 4:09 And it interacts with the world the way you interact with the world. 4:13 Hearing what you hear, seeing what you see. 4:18 While being privacy-first and safe 4:22 and completely fading into the background of your life. 4:27 We like to say that the experience is screenless, seamless 4:31 and sensing, 4:33 allowing you to access the power of compute 4:36 while remaining present in your surroundings, 4:40 fixing a balance that's felt out of place for some time now. 4:46 And I can't wait to share more details about what we've built, 4:51 and I will in the next few months. 4:53 But today I want to talk to you about what it unlocks. 4:56 And what it means to be able to take AI with you everywhere. 5:01 And what happens when technology increasingly disappears. 
5:06 Technology becoming invisible affords us new opportunities 5:11 of how we interact with compute. 5:14 We've become so accustomed to tapping on an app 5:18 or moving a cursor with a mouse 5:21 that it feels second nature. 5:24 But that's by design. 5:26 When I was working on the iPhone, 5:28 I used to test interactions like slide-to-unlock 5:32 with my infant daughter. 5:34 She was the best possible focus group. 5:38 She’s 16 now, 5:39 and she's got a lot more ideas than she did back then. 5:44 This also, by the way, is the only non-AI generated image 5:48 that you'll see from me today. 5:49 And as I look at it now, 5:52 I see more than ever why a future driven by AI 5:57 is far better than a future that would involve more screens. 6:03 Like this. 6:05 He's cute, though. 6:08 But for the human-technology relationship to actually evolve beyond screens, 6:13 we need something radically different. 6:17 Let me show you. 6:21 Where can I find a gift for my wife before I have to leave tomorrow? 6:27 (Voice) Vancouver's Granville Island is a lively shopping district. 6:32 IC: That's an incredibly simple response for a very complex query. 6:37 How often do we find ourselves in a new city, 6:41 wrestling with our phones, 6:44 trying not to bump into people, 6:46 trying to figure out where we're going and where we're supposed to be? 6:50 It's even harder when we don't speak the language, right? 6:54 Let me show you something. 6:56 Invisible devices should feel so natural to use 7:00 that you almost forget about their existence. 7:06 (Voice speaking in French) 7:15 IC: You'll note that's me and my voice, speaking fluent French, 7:21 using an AI speech model that's part of my own AI. 7:26 This is not a deepfake. 7:29 In fact, it's deeply profound. 7:32 This is my AI giving me the ability to speak any language 7:37 and you having a chance to hear me speak that language 7:42 in my own emotion and my own voice. 7:48 Thank you. 
7:49 (Applause) 7:54 This is moving away from the experiments that make us all concerned 7:57 about the direction compute is going in. 8:00 But it's instead using technology 8:03 to create real, responsible compute products 8:07 that are in service to us and built on trust. 8:13 This is good AI in action. 8:16 And we spent thousands of hours 8:18 reimagining and redesigning new types of compute interactions, 8:23 ranging from complex voice commands to intricate hand gestures, 8:29 all in service of trying to find more natural ways to interact with compute. 8:35 Why fumble for your phone when you can just hold an object 8:40 and ask questions about it? 8:42 The result almost feels like the entire world becomes your operating system. 8:48 And when compute disappears, 8:50 it allows us to get back to what really matters: 8:53 a new ability to be present. 8:59 Like riding a bicycle in the park and just ripping through emails 9:05 or going to a concert without having to hold up your phone to capture it. 9:13 Or experiencing your toddler's first steps 9:16 without a screen between you and your child. 9:20 In the future, 9:21 technology will be both ambient and contextual. 9:26 And this means harnessing AI to really understand you 9:30 and your surroundings 9:32 in order to achieve the best results. 9:36 Imagine this. 9:38 You've been in meetings all day 9:41 and you just want a summary of what you've missed. 9:46 Catch me up. 9:49 (Voice) Patrick is coming to tomorrow's design meeting. 9:52 Bethany wants to move next week's dinner, 9:54 and Oliver is asking about soccer this weekend. 9:57 IC: These are emails, calendar invites and messages, 10:01 all surfaced up to the top. 10:04 You can use these to help guide your decision making, 10:08 manage your workload 10:09 and sculpt tailored responses in your own voice. 10:14 And in the context of your life. 10:17 And we gain this context through machine learning. 
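The "catch me up" demo above, surfacing emails, calendar invites and messages to the top, can be sketched as a tiny ranking pass. The priority scores are invented; a real assistant would learn them from your context:

```python
# Toy "catch me up": merge items from several sources and surface the
# highest-priority ones first. The priority numbers are invented stand-ins
# for whatever a real assistant would learn about your context.

items = [
    {"source": "email",    "text": "Patrick is coming to the design meeting", "priority": 2},
    {"source": "calendar", "text": "Bethany wants to move next week's dinner", "priority": 3},
    {"source": "message",  "text": "Oliver is asking about soccer",           "priority": 1},
]

def catch_me_up(inbox, top=2):
    """Return the `top` most important item texts, highest priority first."""
    ranked = sorted(inbox, key=lambda item: item["priority"], reverse=True)
    return [item["text"] for item in ranked[:top]]

for line in catch_me_up(items):
    print(line)
```

The interesting design question the talk raises is upstream of this loop: how the scores get assigned from learned context rather than hand-set numbers.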
10:21 The more you use our device powered by AI, 10:24 the more we can help you in all times of need. 10:29 Your AI effectively becomes an ever-evolving, 10:33 personalized form of memory. 10:36 And we think that's amazing. 10:39 In fact, let's say you're health conscious 10:43 or you have certain types of food considerations. 10:46 Let me just show you. 10:48 Picked up one of these chocolates. 10:50 Used to eat a ton of these when I was a kid. 10:54 Can I eat this? 10:59 (Voice) A milky bar contains cocoa butter. 11:01 Given your intolerance, you may want to avoid it. 11:05 IC: So I can’t eat these anymore. 11:08 (Applause) 11:13 But what's cool is my AI knows what's best for me. 11:16 But I'm in total control. 11:21 I'm going to eat it anyway. 11:23 (Laughter) 11:24 (Voice) Enjoy it. 11:25 (Laughter) 11:31 IC: Your AI figures out exactly what you need. 11:35 And by the way, I love that there's no judgment. 11:38 I think it's amazing to be able to live freely. 11:43 Your AI figures out what you need at the speed of thought. 11:48 A sense that will ever be evolving as technology improves too. 11:54 And these examples are just the start. 11:57 As AI advances, 11:59 we will see how it will transform nearly every aspect of our lives. 12:04 In ways that will seem unimaginable right now. 12:08 In fact, Sam Altman from OpenAI feels the way we do. 12:13 And that AI is grossly underestimated. 12:18 And I'll add, so long as we get it right. 12:22 We really believe 12:23 that we're only beginning to scratch the surface of what's possible. 12:27 Embed advancements of AI, like 12:31 in our device that's actually built to disappear 12:34 and allow experiences to come forward, 12:37 and we open up entirely new possible ways of how you interact with technology 12:43 and how you interact with the world around you. 12:47 More humane, intuitive interactions 12:50 that are screenless, seamless and sensing. 12:55 This is so much more than devices just getting smaller or more powerful. 
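The "Can I eat this?" moment above boils down to checking a product's ingredients against the user's known intolerances. A minimal sketch, with an illustrative ingredient list:

```python
# Sketch of the "Can I eat this?" check: flag any ingredient that appears
# in the user's intolerance list. The ingredient list here is illustrative.

def check_food(ingredients, intolerances):
    """Return advice based on which flagged ingredients are present."""
    hits = [i for i in ingredients if i in intolerances]
    if hits:
        return f"Contains {', '.join(hits)}; you may want to avoid it."
    return "No flagged ingredients."

milky_bar = ["sugar", "cocoa butter", "milk powder"]
print(check_food(milky_bar, {"cocoa butter"}))
# Contains cocoa butter; you may want to avoid it.
```

Note how the demo's "no judgment" stance maps to the design: the check advises, but the user stays in control of the decision.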
13:01 This is the possibility of reimagining the human-technology relationship 13:07 as we know it. 13:09 And that's what's so exciting. 13:12 It's a huge challenge, no doubt. 13:16 But it's the world that we want to live in. 13:20 One where technology not only helps you get back into the world 13:26 but enhances our ability to do so. 13:29 It's within reach. 13:31 And you saw some of it today. 13:35 The future will not be held in your hand, 13:38 and it won't be on your face either. 13:41 The future of technology might almost be invisible. 13:47 Thank you. 13:48 (Applause)

TED2023.4.20---Greg Brockman: The Inside Story of ChatGPT's Astonishing Potential




Chat YouTube:
OpenAI co-founder Greg Brockman discusses the current state of AI technology and demonstrates a new app for ChatGPT that uses AI to generate images and text. The app can save information for later use and can integrate with other applications. The technology is designed to be easy to use without explicit instructions.


Q: What is the purpose of the new app for ChatGPT?
Chat YouTube:
The purpose of the new ChatGPT app is to use AI to generate images and text, making it easier for users to create content without explicit instructions. It can also save information for later use and integrate with other applications.


Q: How does the app generate images and text?
Chat YouTube:
The app uses AI to generate images and text, specifically an AI language model called GPT (Generative Pre-trained Transformer). The AI can understand the user's intent, generate relevant responses in natural language, and produce images from the prompts it is given. The app is designed to keep learning and improving its ability to generate better-quality images and text.
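Underneath, a GPT-style model generates text one token at a time by predicting what comes next. A toy bigram table can stand in for the learned model to show the shape of that loop:

```python
# Toy next-token generation: a hard-coded bigram table stands in for the
# learned GPT model; the loop structure (predict, append, repeat) is the
# same idea at a vastly smaller scale.

BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "sat": ["down"],
}

def generate(start, steps=3):
    out = [start]
    for _ in range(steps):
        options = BIGRAMS.get(out[-1])
        if not options:
            break
        out.append(options[0])  # greedy: always take the top prediction
    return " ".join(out)

print(generate("the"))  # the cat sat down
```

A real model replaces the lookup table with a neural network over a huge vocabulary and usually samples from the predicted distribution instead of always taking the top option.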


Q: How does the technology make the user interface more user-friendly?
Chat YouTube:
The technology makes the user interface more user-friendly by providing a unified language interface, so users don't need to know every detail or menu option of the various apps they are using. The AI language model understands the user's natural-language input and makes choices without needing explicit instructions each time. This saves users the time and effort of navigating between different apps and tools. In addition, the AI can save information for later use and integrate with other applications, making the experience more seamless and efficient.
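That unified language interface can be pictured as one natural-language entry point that routes each request to the right "app", so the user never touches menus. The verb-to-app table and app names here are hypothetical:

```python
# Sketch of a unified language interface: one natural-language entry point
# that picks the right "app". The verb-to-app table is a made-up stand-in
# for the model's intent understanding.

APPS = {
    "draw": "image_app",
    "remind": "calendar_app",
    "send": "messaging_app",
}

def dispatch(request: str) -> str:
    for verb, app in APPS.items():
        if verb in request.lower():
            return f"{app} handles: {request!r}"
    return "no app matched; ask the model to clarify"

print(dispatch("Draw a picture of a nice meal"))
```

In the real system the routing is done by the language model itself rather than keyword matching, which is exactly what lets it handle requests no table could anticipate.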


0:00 We started OpenAI seven years ago 0:06 because we felt like something really interesting was happening in AI, and we wanted to help steer it in a positive 0:12 direction. It's honestly just really amazing to see how far this whole field has come since 0:19 then, and it's really gratifying to hear from people like Raymond who are using 0:24 the technology we are building, and others, for so many wonderful things. We hear from people who are excited, we 0:31 hear from people who are concerned, we hear from people who feel both those emotions at once, and honestly, that's how we feel. 0:40 Above all, it feels like we're entering a historic period right now where we as a world are going to define a technology 0:48 that will be so important for our society going forward. And I believe that we can manage this 0:55 for good. So today I want to show you the current state of that technology and some of the 1:01 underlying design principles that we hold dear. 1:09 The first thing I'm going to show you is what it's like to build a tool for an AI rather than building it for a human. 1:17 We have a new DALL·E model, which generates images, and we are exposing it as an app for ChatGPT to use on your 1:24 behalf. And you can do things like ask, you know, suggest a nice 1:30 post-TED meal and draw a picture of it. 1:38 Now you get all of the ideation and creative back-and-forth, and taking 1:43 care of the details for you, that you get out of ChatGPT. And here we go, it's not just the idea for the meal but a very, 1:51 very detailed spread. So let's see what we're going to get. But 1:56 ChatGPT doesn't just generate images in this case, sorry, it doesn't just generate text, it also generates an image, 2:02 and that is something that really expands the power of what it can do on your behalf in terms of carrying out 2:07 your intent. And I'll point out, this is all a live demo, this is all generated by the AI as we speak, so I actually 2:13 don't even know what we're going to see 
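The tool-use pattern being demonstrated, the model writing a plain-text prompt for an image tool that a person can later read "under the hood", might be sketched like this (the tool and log names are invented, not OpenAI's actual API):

```python
# Sketch of an inspectable tool call: the model writes a human-readable
# prompt for an image tool, and every call is logged so a person can look
# under the hood. Names here are invented, not OpenAI's actual API.

tool_log = []

def call_image_tool(prompt: str) -> str:
    tool_log.append({"tool": "image_generator", "prompt": prompt})
    return f"<image generated from: {prompt}>"

result = call_image_tool("a detailed spread for a nice post-TED meal")
print(tool_log[-1]["prompt"])  # the exact prompt the model wrote
```

The logging is the point: because the tool call is expressed as ordinary text, a human can audit it and give feedback, which the talk returns to later.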
This looks wonderful. 2:20 Thank you, now I'm getting hungry just looking at it. Now, we've extended ChatGPT with 2:26 other tools too, for example memory. You can say, save this for later. 2:32 And the interesting thing about these tools is they're very inspectable, so you get this little pop-up here that says 2:37 "used the DALL·E app", and by the way, this is coming to all ChatGPT users over upcoming months. And you can look under 2:43 the hood and see that what it actually did was write a prompt, just like a human could. And so you sort of have this ability 2:50 to inspect how the machine is using these tools, which allows us to provide feedback to them. Now it's saved for 2:56 later, and let me show you what it's like to use that information and to integrate with other applications too. You can say, 3:03 now make a shopping list for the tasty 3:09 thing I was suggesting earlier, and make it a little tricky for the AI, 3:16 and tweet it out for all the TED viewers out there. 3:23 So if you do make this wonderful, wonderful meal, I definitely want to know how it tastes. 3:28 But you can see that ChatGPT is selecting all these different tools without me having to tell it explicitly 3:34 which ones to use in any situation. And this, I think, shows a new way of 3:39 thinking about the user interface. Like, we are so used to thinking of, well, we have these apps, we click between them, we 3:45 copy-paste between them, and usually it's a great experience within an app as long as you kind of know the menus and know all the options. 3:51 Yes, I would like you to, yes please, always good to be polite. 4:00 And by having this unified language interface on top of tools, the AI is able 4:07 to take away all those details from you, so you don't have to be the one who spells out every single 4:13 little piece of what's supposed to happen. And as I said, this is a live demo, so sometimes the unexpected will happen to 4:20 us, but let's take a look at the Instacart shopping list while 
we're at it. You can see we sent a list of 4:27 ingredients to Instacart, here's everything you need. And the thing that's really interesting is 4:33 that the traditional UI is still very valuable, right? If you look at this, you still can click through it and 4:39 modify the actual quantities. And that's something that I think shows that they're not going away, 4:46 traditional UIs, it's just we have a new, augmented way to build them. And now we have a tweet that's been drafted for our 4:52 review, which is also a very important thing. We can click run, and there we are, we're the manager, we're able to inspect, 4:58 we're able to change the work of the AI if we want to. And so after 5:05 this talk, you will be able to access this yourself. 5:17 And there we go. Cool. So thank you, everyone. 5:23 [Applause] 5:29 So we'll cut back to the slides. Now, the important thing about how we 5:35 build this, it's not just about building these tools, it's about teaching the AI how to use them. Like, what do we even 5:41 want it to do when we ask these very high-level questions? And to do this, we use an old idea. 5:48 If you go back to Alan Turing's 1950 paper on the Turing test, he says, look, you'll never program an answer to this. 5:53 Instead, you can learn it. You could build a machine, like a human child, and then teach it through feedback. Have a human 6:00 teacher who provides rewards and punishments as it tries things out and does things that are either good or bad. 6:06 And this is exactly how we train ChatGPT. It's a two-step process. First, we produce what Turing would have called a child machine through an unsupervised 6:12 learning process. We just show it the whole world, the whole internet, and say, predict what comes next in text you've 6:18 never seen before. And this process imbues it with all sorts of wonderful skills. For example, if you're shown a 6:24 math problem, the only way to actually complete that math problem, say what comes next, that green line up there, is 
6:30 to actually solve the math problem. But we actually have to do a second step 6:36 too, which is to teach the AI what to do with those skills. And for this, we provide feedback. We have the AI try out 6:41 multiple things, give us multiple suggestions, and then a human rates them, says this one's better than that one. And this reinforces not just the specific 6:48 thing that the AI said but, very importantly, the whole process that the AI used to produce that answer. And this 6:54 allows it to generalize. It allows it to infer your intent and apply it in scenarios that it hasn't 7:00 seen before, that it hasn't received feedback on. Now, sometimes the things we have to teach the AI are not what you'd expect. 7:06 For example, when we first showed GPT-4 to Khan Academy, they said, wow, this is so great, we're going to be able to teach 7:12 students wonderful things. Only one problem, it doesn't double-check students' math. If there's some bad math in there, 7:19 it will happily pretend that one plus one equals three and run with it. So we had to collect some feedback data. 7:25 Sal Khan himself was very kind and offered 20 hours of his own time to provide feedback to the machine alongside our team. And over the course 7:33 of a couple of months, we were able to teach the AI that, hey, you really should push back on humans in this specific kind of 7:39 scenario. And we've actually made lots and lots of improvements to the 7:45 models this way. And when you push that thumbs-down in ChatGPT, that actually is kind of like sending up a bat signal to our team, to say here's an 7:52 area of weakness where you should gather feedback. And so when you do that, that's one way that we really listen to our users 7:58 and make sure we're building something that's more useful for everyone. Now, providing high-quality feedback is a 8:06 hard thing. If you think about asking a kid to clean their room, if all you're doing is inspecting the floor, you don't know if 8:12 you're just teaching
them to stuff all the toys in the closet. This is a nice DALL·E-generated image, by 8:17 the way. And the same sort of reasoning applies to AI. As we 8:25 move to harder tasks, we will have to scale our ability to provide high-quality feedback. 8:30 But for this, the AI itself is happy to help. It's happy to help us provide 8:36 even better feedback and to scale our ability to supervise the machine as time goes on. And let me show you what I mean. 8:42 For example, you can ask GPT-4 a question like this, of how much time 8:48 passed between these two foundational blogs on unsupervised learning and learning from human 8:53 feedback. And the model says two months passed. But is it true? Like, these models 8:58 are not 100 percent reliable, although they're getting better every time we provide some feedback. 9:04 But we can actually use the AI to fact-check, and it can actually check its own work. You can say, fact 9:10 check this for me. Now, in this case, I've actually given 9:15 the AI a new tool. This one is a browsing tool, where the model can issue search queries and click into web pages. And 9:22 it actually writes out its whole chain of thought as it does it. It says, I'm just going to search for this, and it actually does the search. It then finds the 9:29 publication date in the search results. It then is issuing another search query, it's going to click into the blog 9:34 post. And all of this you could do, but it's a very tedious task. It's not a thing that humans really want 9:40 to do. It's much more fun to be in the driver's seat, to be in this manager's position where you can, if you want, triple-check the work. And out come 9:47 citations, so you can actually go and very easily verify any piece of this whole chain of 9:53 reasoning. And it actually turns out two months was wrong. Two months and one week, that was correct. 10:01 [Applause] 10:07 And we'll come back to the slide. And so the thing that's so interesting to me about this whole process is that it's a many 10:14 
step collaboration between a human and an AI, because a human using this fact-checking tool is doing it in order to 10:20 produce data for another AI to become more useful to a human. And I think this really shows the shape 10:27 of something that we should expect to be much more common in the future, where we have humans and machines 10:33 carefully and delicately designed in how they fit into a problem and how we want 10:38 to solve that problem. We make sure that the humans are providing the management, the oversight, the feedback, and the machines are operating in a way that's 10:45 inspectable and trustworthy. And together we're able to actually create even more trustworthy machines. And I think 10:50 that over time, if we get this process right, we will be able to solve impossible problems. 10:55 And to give you a sense of just how impossible I'm talking, I think we're going to be able to 11:01 rethink almost every aspect of how we interact with computers. For example, think about spreadsheets. They've been 11:08 around in some form since, you know, we'll say 40 years ago, with VisiCalc. I don't think they've really changed that much 11:14 in that time. And here is a specific spreadsheet of 11:19 all the AI papers on the arXiv for the past 30 years. There's about 167,000 of 11:24 them, and you can see the data right here. But let me show you the ChatGPT take on how to analyze a data set 11:31 like this. 11:37 So we can give ChatGPT access to yet another tool, this one a Python 11:42 interpreter, so it's able to run code just like a data scientist would. And so you can just literally upload a file and 11:49 ask questions about it. And very helpfully, you know, it knows the name of the file, and it's like, oh, this is a CSV, comma- 11:55 separated values file, I'll parse it for you. The only information here is the name of the file, the column names like 12:02 you saw, and then the actual data. And from that it's able to infer what these 12:07 columns actually mean. Like, 
that semantic information wasn't in there. It has to put together its world knowledge, 12:13 of knowing that, oh yeah, arXiv is the site that people submit papers to, and therefore that's what these things are, 12:18 and these are integer values, and so therefore it's the number of authors on the paper. Like, all of that, that's work for a human to do, and the AI's happy to help 12:25 with it. Now, I don't even know what I want to ask, so fortunately you can ask the machine: 12:31 can you make some exploratory graphs? 12:37 And once again, this is a super high-level instruction with lots of intent behind it. But I don't even know what I 12:42 want, and the AI kind of has to infer what I might be interested in. And so it comes up with some good ideas, I think. So 12:47 a histogram of the number of authors per paper, time series of papers per year, word cloud of the paper titles. All of 12:53 that, I think, will be pretty interesting to see. And the great thing is it can actually do it. Here we go, a nice bell curve. You see that 13:00 three is kind of the most common. It's going to then make this nice plot of the papers per 13:07 year. Something crazy is happening in 2023, though. Looks like we were on an exponential, and it dropped off a cliff. What could be going on there? And by the 13:14 way, all this is Python code you can inspect. And then we'll see the word cloud, and so you can see all these wonderful things 13:20 that appear in these titles. But I'm pretty unhappy about this 2023 thing. It makes this year look 13:26 really bad. Of course, the problem is that the year is not over, so I'm going to push back on the machine. 13:44 So April 13th was the cutoff date, I believe. Can you use that 13:51 and make a fair projection? So we'll see, this is a kind of ambitious 13:56 one. So, you know, again, I feel like there was 14:03 more I wanted out of the machine here. I really wanted it to notice this thing. Maybe it's a little 14:08 bit of an overreach for it to have 
sort of inferred magically that this is what I wanted. But I inject my intent, 14:16 I provide this additional piece of, you know, guidance. And under the 14:21 hood, the AI is just writing code again, so if you want to inspect what it's doing, it's very possible. 14:26 And now it does the correct projection. 14:35 If you notice, it even updates the title. I didn't ask for that, but it knows what I want. 14:41 Now we'll cut back to the slide again. This slide shows a parable of how I 14:49 think we, you know, a vision of how we may end up using this technology in the future. 14:54 A person brought his very sick dog to the vet, and the veterinarian made a bad call, to say, let's just wait and see, 15:01 and the dog would not be here today had he listened. In the meanwhile, he provided the blood 15:07 test, like the full medical records, to GPT-4, which said, I am not a vet, you need to talk to a professional, here are some 15:14 hypotheses. He brought that information to a second vet, who used it to save the dog's life. 15:21 Now, these systems, they're not perfect. You cannot overly rely on them. But 15:26 this story, I think, shows that the human, with a medical professional and with 15:33 ChatGPT as a brainstorming partner, was able to achieve an outcome that would not have happened otherwise. I think this 15:38 is something we should all reflect on, think about, as we consider how to integrate these systems into our world. 15:44 And one thing I believe really deeply is that getting AI right is going to require participation from everyone. And 15:51 that's for deciding how we want it to slot in, that's for setting the rules of the road for what an AI will and won't 15:56 do. And if there's one thing to take away from this talk, it's that this technology just looks 16:01 different, just different from anything people had anticipated. And so we all have to become literate. And that's, 16:06 honestly, one of the reasons we released ChatGPT. Together, I believe that we can achieve 16:11 the OpenAI mission of 
ensuring that artificial general intelligence benefits all of humanity. Thank you. 16:34 Greg, wow. I mean, 16:39 I suspect that within every mind out here there's a feeling of 16:45 reeling. Like, I suspect that a very large number of people viewing this, you look at that and you think, oh my goodness, 16:51 pretty much every single thing about the way I work, I need to rethink. Like, there's just new possibilities there. Am 16:57 I right? Who thinks that they're having to rethink the way that we do things? Yeah, I mean, it's amazing, but it's 17:04 also really scary. So let's talk, Greg, let's talk. I guess my first 17:09 question actually is, just how the hell have you done this? You know, OpenAI has a few hundred employees. 17:16 Google has thousands of employees working on artificial intelligence. 17:22 Why is it you who's come up with this technology that shocked the world? Yeah, well, I mean, the truth is, we're all building on shoulders of giants, right, 17:29 there's no question. If you look at the compute progress, the algorithmic progress, the data progress, all of those are really industry-wide. But I 17:35 think within OpenAI, we made a lot of very deliberate choices from the early days. And the first one was just to 17:41 confront reality as it lays. And we just thought really hard about, like, what is it going to take 17:47 to make progress here? We tried a lot of things that didn't work, so you only see the things that did. And I think that the most important thing has been to get 17:53 teams of people who are very different from each other to work together harmoniously. 17:59 Can we have the water, by the way, just brought here? I think we're going to need it, it's a dry-mouth 18:04 topic. But isn't there something also just about the fact that you 18:11 saw something in these language models that meant that if you continue to 18:16 invest 
in them and grow them, that something at some point might emerge? 18:21 Yes. And I think that, I mean, honestly, I think the story there is pretty illustrative. Right? I think 18:28 that at a high level, deep learning, like, we always knew that what we wanted to be was a deep learning lab. And exactly how to do it, I think that 18:34 in the early days, we didn't know. We tried a lot of things, and one person was working on training a model to predict 18:41 the next character in Amazon reviews, and he got a result where, 18:46 this is a syntactic process, you expect, you know, the model will predict where the commas go, where the nouns and verbs 18:52 are. But he actually got a state-of-the-art sentiment analysis classifier out of it. This model 18:58 could tell you if a review was positive or negative. I mean, today we are just like, ah, come on, anyone can do that. 19:03 But this was the first time that you saw this emergence, this sort of semantics that emerged from this 19:11 underlying syntactic process. And there we knew, you've got to scale this thing, you've got to see where it goes. So I think this helps explain the riddle that 19:18 baffles everyone looking at this, because these things are described as prediction machines, and yet what we're seeing out 19:24 of them, it just feels impossible that that could come from a, you know, prediction machine, 19:29 just the stuff you showed us just now. And the key idea of emergence is that 19:34 when you get more of a thing, suddenly different things emerge. It happens all the time. With ant colonies, single ants run around, and when you bring enough of them 19:40 together, you get these ant colonies that show completely emergent, different behavior. Or a 19:45 city, where a few houses together, it's just houses together, but as you grow the number of houses, things emerge, like 19:52 suburbs and cultural centers and traffic jams. 19:57 Give me one moment, for you, when you saw just something pop that just blew your mind 
that you just did not see coming. 20:03 Yeah, well, so you can try this in ChatGPT: if you add two 40-digit numbers, the model 20:10 will do it, which means it's really learned an internal circuit for how to do it. And the really interesting 20:16 thing is, if you have it add a 40-digit number plus a 35-digit number, it'll often get it wrong. 20:22 And so you can see that it's really learning the process, but it hasn't fully generalized, right? You can't 20:28 memorize the addition table for 40-digit numbers; that's more entries than there are atoms in the universe. So it had to have learned something general, but it hasn't really fully 20:35 learned that, oh, I can generalize this to adding numbers of arbitrary lengths. 20:40 So what's happened here is that you've allowed it to scale up and look at an incredible number of pieces of text, and it is learning things 20:47 that you didn't know it was going to be capable of learning. Well, yeah, and it's more nuanced, too, because 20:54 one science that we're starting to really get good at is predicting some of these emergent capabilities. And to do 20:59 that, one of the things I think is very undersung in this field is engineering quality. We had to rebuild our entire stack. 21:05 When you think about building a rocket, every tolerance has to be incredibly tiny. The same is true in machine 21:12 learning: you have to get every single piece of the stack engineered properly, and then you can start doing these predictions. There are all these 21:18 incredibly smooth scaling curves that I think tell you something deeply fundamental about intelligence. If you look at our GPT-4 blog post, you can 21:24 see all these curves in there, and now we're starting to be able to predict. So we were able to predict, for example, the performance on coding problems from 21:32 basically looking
at some models that are ten thousand times or a thousand times smaller. And so there's something about this that is actually 21:39 smooth scaling, even though it's still early days. So here is one of the big fears, then, 21:45 that arises from this. If it's fundamental to what's happening here that as you scale up, things emerge that 21:51 you can maybe predict with some level of confidence, but that are still 21:57 capable of surprising you, why isn't there just a huge risk of 22:02 something truly terrible emerging? Well, I think all of these are questions of degree and scale and timing. And I think one 22:09 thing people miss, too, is that the integration with the world is also an incredibly emergent, 22:14 very powerful thing, and that's one of the reasons that we think it's so important to deploy incrementally. And so 22:19 I think what we see right now, if you look at this talk, is that a lot of what I focus on is providing really high-quality 22:24 feedback. Today, the tasks that we do, you can inspect them, right? It's very easy to look at that math problem 22:30 and be like, no, no, machine, seven was the correct answer. But even summarizing a book, 22:36 that's a hard thing to supervise. How do you know if this book summary is any good? You have to read the whole book; no one wants to do that. 22:44 And so I think that the important thing will be that we take this step by step, and that we say, OK, as we move on 22:51 to book summaries, we have to supervise this task properly. We have to build up a track record with these machines, that 22:56 they're able to actually carry out our intent. And I think we're going to have to produce even better, more efficient, more reliable 23:02 ways of scaling this, of making the machine be aligned with you. So we're going to hear later in this 23:08 session that there are critics who say, you know, there's no real 23:13 understanding inside the
system, that we're never going to know that it's not generating errors, 23:20 that it doesn't have common sense, and so forth. Is it your belief, Greg, that that is true at any one moment, but 23:27 that the expansion of the scale and the human feedback that you talked about is basically going to 23:34 take it on that journey of actually getting to things like truth and wisdom and so forth, with a high degree of 23:40 confidence? How can you be sure of that? Yeah, well, I mean, the short answer is yes, I believe 23:45 that is where we're headed. And I think that the OpenAI approach here has always been just, let reality hit you in the face, right? 23:52 This field is the field of broken promises, of all these experts saying X is going to happen, Y is how it works. People have been saying neural nets 23:59 aren't going to work for 70 years. They haven't been right yet. They might be right, you know, maybe 70 years plus one 24:04 or something like that is what you need. But I think our approach has always been that you've got to push to the limits of this technology to really see it in 24:10 action, because that tells you, oh, here's how we can move on to a new paradigm. And we just haven't exhausted 24:16 the fruit here. I mean, it's quite a controversial stance you've taken, that the right way to do 24:21 this is to put it out there in public and then harness all this; you know, instead of just your team giving 24:27 feedback, the world is now giving feedback. But 24:33 if bad things are going to emerge, it is out there. So, you know, 24:38 the original story that I heard when OpenAI was founded as a nonprofit was that you were there as the great sort of 24:45 check on the big companies doing their unknown, possibly evil thing with AI, and 24:51 you were going to build models that sort of, you know, somehow held them 24:56 accountable, and were capable of slowing the
field down if need be, or at 25:02 least that's kind of what I had heard. And yet what's happened, arguably, is the opposite: your release of 25:09 GPT, and especially ChatGPT, sent shock waves through the tech world, and now 25:14 Google and Meta and so forth are all scrambling to catch up. And some of their criticisms have been, you are forcing us 25:21 to put this out here without proper guardrails, or we die. You know, how do 25:26 you make the case that what you have done is responsible here and not reckless? Yeah, we think about 25:32 these questions all the time, like, seriously, all the time. And I don't think we're always going to 25:38 get it right. But one thing I think has been incredibly important, from the very beginning, when we were thinking about how to build artificial general intelligence 25:44 and actually have it benefit all of humanity: how are you supposed to do that, right? And the default plan of, well, you build it in secret, you 25:51 get the super-powerful thing, then you figure out the safety of it, and then you push go and you hope you got it right? 25:57 I don't know how to execute that plan. OK, maybe someone else does, but for me, that was always terrifying; it 26:02 didn't feel right. And so I think that this alternative approach is the only other path that I see, which 26:09 is that you do let reality hit you in the face. And I think you do give people time to give input; you do have, well before these machines are perfect, before 26:16 they are super powerful, the ability to see them in action. And we've seen it from GPT-3, right? 26:21 With GPT-3, we really were afraid that the number one thing people were going to do with it was generate misinformation, try 26:27 to tip elections. Instead, the number one thing was generating Viagra spam. 26:34 Hmm. So Viagra spam is bad, but there are things that are much worse. Here's a thought
experiment for you. 26:40 Suppose you're sitting in a room, and there's a box on the table. You believe that in that box, 26:47 there's a very strong chance, is something absolutely glorious that's going to give beautiful gifts to your family and to everyone. But there's 26:54 actually also a one percent line in the small print there that says Pandora, and 27:02 there's a chance that this actually could unleash unimaginable evils on the world. Do you open that box? Well, so, absolutely 27:09 not. I think you don't do it that way. And actually, honestly, I'll tell 27:14 you a story that I haven't actually told before, which is that shortly after we started OpenAI, I remember I 27:19 was in Puerto Rico for an AI conference, sitting in the hotel room, just looking out over this wonderful water, all these people having 27:25 a good time. And you think about it for a moment: if you could choose for 27:31 basically that Pandora's box to be, you know, five years away or 500 years away, 27:36 which would you pick, right? On the one hand, you're like, well, maybe for you personally it's better to have it be five years 27:42 away. But if it gets to be 500 years away, and people get more time to get it right, which do you pick? And, 27:48 you know, I just really felt it in that moment. I was like, of course you do the 500 years. For real. There are many people, like my brother, who was in the 27:54 military at the time; he puts his life on the line in a much more real way than any of us typing things into computers and 28:01 developing this technology at the time. And so, yeah, I'm really sold on the idea that you've got to 28:06 approach this right. But I don't think that's quite playing the field as it truly lies. If you look at the whole 28:13 history of computing, I really mean it when I
say that this is an 28:18 industry-wide, or even almost a human-development-of-technology-wide, shift. And the more that 28:24 you don't put together the pieces that are there, right, we're still 28:29 making faster computers, we're still improving the algorithms, all of these things are happening, and if you don't put them together, you get an 28:35 overhang, which means that if someone does, or the moment that someone does manage to connect the 28:41 circuit, then you suddenly have this very powerful thing, no one's had any time to adjust, and who knows what kind of 28:46 safety precautions you get. And so I think one thing I take away is that even if you think about the 28:52 development of other technologies, think about nuclear weapons: people talk about it being, like, a zero-to-one sort of change 28:58 in what humans could do. But I actually think that if you look at capability, it's been quite smooth over time. And so 29:04 the history, I think, of every technology we've developed has been that you've got to do it incrementally, and you've got to 29:10 figure out how to manage it at each moment that you're increasing it. So what I'm hearing is that the 29:17 model you want us to have is that we have birthed this extraordinary child that may have superpowers that take 29:24 humanity to a whole new place. It is our collective responsibility to provide the 29:31 guardrails for this child, to collectively teach it to be wise and not to tear us all down. Is that basically 29:38 the model? I think it's true, and I think it's also important to say this may shift, right? We've got to take each 29:44 step as we encounter it. And I think it's incredibly important today that we all 29:49 do get literate in this technology, figure out how to provide the feedback, decide what we want from it. And I think 29:55 my hope is that that will continue to be the best path, but it's so good
we're honestly having this debate, because we wouldn't otherwise if 30:01 it weren't out there. Greg Brockman, thank you so much for coming to TED and blowing our minds. Thank you, I appreciate it.
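Brockman's point about forecasting a large model's performance from models a thousand or ten thousand times smaller rests on fitting smooth scaling curves. A minimal sketch of that idea: fit a power law to small-model losses and extrapolate. The data points, the exact power-law form, and the compute values are invented for illustration; they are not from OpenAI's GPT-4 blog post.

```python
import math

# Hypothetical (compute, loss) points for a family of small models.
# The talk only says such smooth curves exist; these numbers are made up.
compute = [1e3, 1e4, 1e5, 1e6]      # relative training compute
loss    = [4.0, 3.2, 2.56, 2.048]   # observed loss for each model

# Fit log(loss) = log(a) - b * log(compute) by ordinary least squares.
xs = [math.log(c) for c in compute]
ys = [math.log(v) for v in loss]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my + b * mx)

def predict(c):
    """Extrapolate the fitted curve loss = a * c**(-b) to a larger budget."""
    return a * c ** (-b)

# Forecast for a model 1000x larger than the largest one we fitted.
print(round(predict(1e9), 3))
```

On clean power-law data the fit is exact, which is the point of the "smooth scaling" observation: the curve measured on cheap models pins down the forecast for an expensive one before it is trained.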

TEDxTexasStateUniversity2023.4.20---Ethan Ramirez: The surprising impact of artificial intelligence on artists




Chat YouTube:
The speaker discusses the application of artificial intelligence and machine-learning programs in art and design, highlighting their ability to generate images and artwork. Contrary to earlier assumptions, these technologies can now create and produce art that rivals human creativity, and they are increasingly being used across industries such as advertising and marketing.


Q: Can artificial intelligence completely replace human creativity in fields such as art and music?
Chat YouTube:
The video does not give a definitive answer to this question. The speaker acknowledges AI's ability to create art and design, but also expresses concern about the potential consequences of these technologies becoming too advanced. It is also important to note that creativity is a complex process involving not only the generation of new ideas but also emotions and experiences, which AI may struggle to fully replicate.


Q: Can AI-generated art be copyrighted?
Chat YouTube:
Under current copyright law, whether AI-generated artwork can be copyrighted remains disputed. Copyright law generally requires that a work be created by a human author, an individual with legal personhood, in order to be protected, and AI has no such personhood. If an AI-generated work could be considered "original" and attributed to a clearly identifiable creator, a copyright claim might be possible, but this remains a complex legal question that will require new legislation and judicial interpretation before a definitive answer emerges.

TEDxVienna2023.4.10--- Stephen Wallace: How synthetic biology will help us build a sustainable future




Chat YouTube:
The speaker discusses the environmental impact of the fashion and pharmaceutical industries and the role fossil fuels play in their production. They argue that we need to find alternatives to fossil fuels in order to address climate change and pollution, and suggest that one possible solution is living factories made of microbes. The speaker's research in synthetic biology explores the possibility of programming microorganisms to create a more sustainable future.


Q: How does synthetic biology enable us to create living factories?
Chat YouTube:
Synthetic biology lets us create living factories by programming microorganisms to produce specific products, such as medicines or sustainable materials. Through genetic engineering, synthetic biologists can design microbes to perform particular functions and generate desired outputs. These living factories can then help build a more sustainable future by reducing waste and pollution and by harnessing natural processes and resources.

TED2023.4.10--- Ramez Naam: A faster path to a clean energy future




Chat YouTube:
In this video, the speaker discusses the rapid decline in the cost of clean energy technologies, using solar panels as an example: since 1975, their cost has fallen from $100 per watt to 20 cents per watt. Costs have dropped even further than experts predicted, so the growth of clean energy has been underestimated. The speaker argues that clean energy is a technology, and therefore its cost falls as it scales. This trend is not limited to solar panels; it applies to a wide range of clean energy technologies, including wind power, floating offshore wind, offshore wind, batteries, and other fuels that can power industry, aviation, and shipping. Clean energy costs, however, are not yet falling fast enough to keep warming below 1.5 degrees Celsius.
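The talk's framing, that clean energy is a technology whose cost falls as it scales, is often modeled as a learning curve (Wright's law): each doubling of cumulative production cuts cost by a roughly fixed fraction. A minimal sketch; only the $100/W and 20-cent/W endpoints come from the talk, while the 20% learning rate is an illustrative assumption.

```python
import math

def wrights_law(initial_cost, cumulative_growth, learning_rate):
    """Cost after cumulative production grows by the given factor.

    Wright's law: each doubling of cumulative production multiplies
    cost by (1 - learning_rate).
    """
    doublings = math.log2(cumulative_growth)
    return initial_cost * (1 - learning_rate) ** doublings

# Endpoints from the talk: solar fell from ~$100/W (1975) to ~$0.20/W.
# Assuming a 20% cost reduction per doubling (illustrative, not from the talk),
# how many doublings of cumulative production does that drop imply?
rate = 0.20
doublings_needed = math.log(0.20 / 100) / math.log(1 - rate)
print(round(doublings_needed, 1))  # number of doublings implied by the drop
```

Run with these assumptions, the 500-fold cost decline corresponds to roughly 28 doublings of cumulative solar production, which is why deployment itself, not just research, drives the curve down.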


Q: According to the video, besides solar panels, which other clean energy technologies are rapidly falling in cost?
Chat YouTube:
According to the video, besides solar panels, other clean energy technologies whose costs are falling rapidly include wind power, floating offshore wind, offshore wind, batteries, and other fuels that can power industry, aviation, and shipping.


Q: Floating offshore wind
Chat YouTube:
Floating offshore wind refers to wind turbines mounted on floating platforms at sea that generate clean electricity. It is an emerging technology that has attracted considerable attention and development in recent years. If its cost can continue to fall, it could become an important source of green energy, helping to reduce dependence on fossil fuels, cut carbon dioxide emissions, and protect the environment. Unlike onshore wind farms, floating offshore wind can tap the stronger, steadier winds of the deep ocean and therefore produce more stable power output. In addition, because it operates in deeper waters, it occupies little land and has a smaller impact on wildlife and terrestrial ecosystems. Although still a young technology, floating offshore wind is already deployed in some places and may become one of the more widely used clean energy sources in the future.


Q: Space-based solar power stations
Chat YouTube:
A space-based solar power station is a theoretical orbital facility intended to supply the Earth with renewable, clean energy. It would work by harvesting the abundant sunlight available in space with solar panels and transmitting the resulting power to the ground. In theory, a space-based station receives stronger, uninterrupted sunlight, unimpeded by the Earth's atmosphere and seasonal variation, so it could achieve higher generating efficiency than terrestrial solar plants. However, for technical and cost reasons, no space-based solar power station has yet been operated commercially.


Q: Energy storage batteries
Chat YouTube:
Energy storage batteries are batteries that can store electrical energy and release it later on demand. As a storage technology, they effectively address the mismatch between when energy is produced and when it is used, making them an important tool for sustainable development. Common storage battery technologies include lead-acid batteries, lithium-ion batteries, and button cells. Among these, lithium-ion batteries are the most widely deployed, offering high energy density, long life, low self-discharge, and high charge/discharge efficiency; they are used extensively in electric vehicles, home energy storage systems, and communications equipment. Storage batteries serve many roles, such as balancing power input and output, regulating load and grid frequency, and responding to emergencies. As clean energy grows rapidly, demand for storage batteries will continue to rise, and they will play an important part in the coming energy transition and grid modernization.

TED Talks2016.4.28---Danielle Feinberg: The magic ingredient that brings Pixar movies to life



Published: April 28, 2016
Danielle Feinberg, Pixar's director of photography, creates stories with soul and wonder using math, science and code. Go behind the scenes of Finding Nemo, Toy Story, Brave, WALL-E and more, and discover how Pixar interweaves art and science to create fantastic worlds where the things you imagine can become real. This talk comes from the PBS special "TED Talks: Science & Wonder."




0:12When I was seven years old, 0:14some well-meaning adult asked me what I wanted to be when I grew up. 0:18Proudly, I said: "An artist." 0:20"No, you don't," he said, 0:22"You can't make a living being an artist!" 0:24My little seven-year-old Picasso dreams were crushed. 0:28But I gathered myself, 0:29went off in search of a new dream, 0:31eventually settling on being a scientist, 0:34perhaps something like the next Albert Einstein. 0:37(Laughter) 0:40I have always loved math and science, 0:42later, coding. 0:43And so I decided to study computer programming in college. 0:47In my junior year, 0:48my computer graphics professor showed us these wonderful short films. 0:52It was the first computer animation any of us had ever seen. 0:56I watched these films in wonder, transfixed, 0:59fireworks going off in my head,

1:01thinking, "That is what I want to do with my life." 1:05The idea that all the math, science and code I had been learning 1:08could come together to create these worlds and characters 1:11and stories I connected with, 1:13was pure magic for me. 1:16Just two years later, I started working 1:18at the place that made those films, Pixar Animation Studios. 1:21It was here I learned how we actually execute those films. 1:24To create our movies, 1:25we create a three-dimensional world inside the computer. 1:28We start with a point that makes a line that makes a face 1:32that creates characters, 1:34or trees and rocks that eventually become a forest. 1:37And because it's a three-dimensional world, 1:39we can move a camera around inside that world. 1:43I was fascinated by all of it. 1:45But then I got my first taste of lighting. 1:48Lighting in practice is placing lights inside this three-dimensional world. 1:52I actually have icons of lights I move around in there. 1:54Here you can see I've added a light, 1:56I'm turning on the rough version of lighting in our software, 1:59turn on shadows

2:00and placing the light. 2:02As I place a light, 2:03I think about what it might look like in real life, 2:06but balance that out with what we need artistically and for the story. 2:10So it might look like this at first, 2:14but as we adjust this and move that 2:16in weeks of work, 2:18in rough form it might look like this, 2:21and in final form, like this. 2:28There's this moment in lighting that made me fall utterly in love with it. 2:32It's where we go from this 2:34to this. 2:36It's the moment where all the pieces come together, 2:38and suddenly the world comes to life 2:41as if it's an actual place that exists. 2:44This moment never gets old, 2:45especially for that little seven-year-old girl that wanted to be an artist. 2:49As I learned to light, 2:50I learned about using light to help tell story, 2:54to set the time of day, 2:57to create the mood, 2:59to guide the audience's eye,

3:02how to make a character look appealing 3:05or stand out in a busy set. 3:08Did you see WALL-E? 3:09(Laughter) 3:10There he is. 3:13As you can see, 3:14we can create any world that we want inside the computer. 3:17We can make a world with monsters, 3:20with robots that fall in love, 3:23we can even make pigs fly. 3:25(Laughter) 3:30While this is an incredible thing, 3:32this untethered artistic freedom, 3:34it can create chaos. 3:36It can create unbelievable worlds, 3:39unbelievable movement, 3:41things that are jarring to the audience. 3:43So to combat this, we tether ourselves with science. 3:46We use science and the world we know 3:48as a backbone, 3:49to ground ourselves in something relatable and recognizable. 3:53"Finding Nemo" is an excellent example of this. 3:56A major portion of the movie takes place underwater. 3:59But how do you make it look underwater?

4:01In early research and development, 4:02we took a clip of underwater footage and recreated it in the computer. 4:06Then we broke it back down 4:08to see which elements make up that underwater look. 4:11One of the most critical elements 4:13was how the light travels through the water. 4:15So we coded up a light that mimics this physics -- 4:18first, the visibility of the water, 4:19and then what happens with the color. 4:22Objects close to the eye have their full, rich colors. 4:26As light travels deeper into the water, 4:28we lose the red wavelengths, 4:30then the green wavelengths, 4:31leaving us with blue at the far depths. 4:34In this clip you can see two other important elements. 4:37The first is the surge and swell, 4:39or the invisible underwater current 4:41that pushes the bits of particulate around in the water. 4:44The second is the caustics. 4:46These are the ribbons of light, 4:48like you might see on the bottom of a pool, 4:50that are created when the sun bends through the crests 4:52of the ripples and waves on the ocean's surface. 4:57Here we have the fog beams. 4:58These give us color depth cues,

5:00but also tells which direction is up 5:02in shots where we don't see the water surface. 5:04The other really cool thing you can see here 5:06is that we lit that particulate only with the caustics, 5:10so that as it goes in and out of those ribbons of light, 5:12it appears and disappears, 5:14lending a subtle, magical sparkle to the underwater. 5:18You can see how we're using the science -- 5:21the physics of water, light and movement -- 5:23to tether that artistic freedom. 5:26But we are not beholden to it. 5:28We considered each of these elements 5:30and which ones had to be scientifically accurate 5:33and which ones we could push and pull to suit the story and the mood. 5:37We realized early on that color was one we had some leeway with. 5:41So here's a traditionally colored underwater scene. 5:44But here, we can take Sydney Harbor and push it fairly green 5:47to suit the sad mood of what's happening. 5:50In this scene, it's really important we see deep into the underwater, 5:53so we understand what the East Australian Current is, 5:56that the turtles are diving into and going on this roller coaster ride. 5:59So we pushed the visibility of the water

6:01well past anything you would ever see in real life. 6:04Because in the end, 6:06we are not trying to recreate the scientifically correct real world, 6:10we're trying to create a believable world, 6:12one the audience can immerse themselves in to experience the story. 6:17We use science to create something wonderful. 6:20We use story and artistic touch to get us to a place of wonder. 6:25This guy, WALL-E, is a great example of that. 6:28He finds beauty in the simplest things. 6:30But when he came in to lighting, we knew we had a big problem. 6:33We got so geeked-out on making WALL-E this convincing robot, 6:37that we made his binoculars practically optically perfect. 6:40(Laughter) 6:43His binoculars are one of the most critical acting devices he has. 6:47He doesn't have a face or even traditional dialogue, for that matter. 6:50So the animators were heavily dependent on the binoculars 6:53to sell his acting and emotions. 6:56We started lighting and we realized 6:58the triple lenses inside his binoculars were a mess of reflections.

7:02He was starting to look glassy-eyed. 7:05(Laughter) 7:06Now, glassy-eyed is a fundamentally awful thing 7:10when you are trying to convince an audience 7:12that a robot has a personality and he's capable of falling in love. 7:16So we went to work on these optically perfect binoculars, 7:20trying to find a solution that would maintain his true robot materials 7:23but solve this reflection problem. 7:26So we started with the lenses. 7:27Here's the flat-front lens, 7:28we have a concave lens 7:30and a convex lens. 7:31And here you see all three together, 7:33showing us all these reflections. 7:36We tried turning them down, 7:37we tried blocking them, 7:39nothing was working. 7:41You can see here, 7:42sometimes we needed something specific reflected in his eyes -- 7:46usually Eve. 7:47So we couldn't just use some faked abstract image on the lenses. 7:50So here we have Eve on the first lens, 7:53we put Eve on the second lens, 7:55it's not working. 7:56We turn it down, 7:57it's still not working. 7:59And then we have our eureka moment.

8:01We add a light to WALL-E that accidentally leaks into his eyes. 8:06You can see it light up these gray aperture blades. 8:10Suddenly, those aperture blades are poking through that reflection 8:13the way nothing else has. 8:15Now we recognize WALL-E as having an eye. 8:19As humans we have the white of our eye, 8:22the colored iris 8:23and the black pupil. 8:24Now WALL-E has the black of an eye, 8:28the gray aperture blades 8:29and the black pupil. 8:31Suddenly, WALL-E feels like he has a soul, 8:35like there's a character with emotion inside. 8:40Later in the movie towards the end, 8:41WALL-E loses his personality, 8:43essentially going dead. 8:45This is the perfect time to bring back that glassy-eyed look. 8:49In the next scene, WALL-E comes back to life. 8:52We bring that light back to bring the aperture blades back, 8:55and he returns to that sweet, soulful robot we've come to love.

9:02(Video) WALL-E: Eva? 9:06Danielle Feinberg: There's a beauty in these unexpected moments -- 9:09when you find the key to unlocking a robot's soul, 9:13the moment when you discover what you want to do with your life. 9:17The jellyfish in "Finding Nemo" was one of those moments for me. 9:20There are scenes in every movie that struggle to come together. 9:24This was one of those scenes. 9:26The director had a vision for this scene 9:28based on some wonderful footage of jellyfish in the South Pacific. 9:33As we went along, 9:35we were floundering. 9:36The reviews with the director 9:38turned from the normal look-and-feel conversation 9:41into more and more questions about numbers and percentages. 9:46Maybe because unlike normal, 9:47we were basing it on something in real life, 9:50or maybe just because we had lost our way. 9:52But it had become about using our brain without our eyes, 9:55the science without the art. 9:58That scientific tether was strangling the scene.

10:02But even through all the frustrations, 10:04I still believed it could be beautiful. 10:06So when it came in to lighting, 10:08I dug in. 10:10As I worked to balance the blues and the pinks, 10:13the caustics dancing on the jellyfish bells, 10:16the undulating fog beams, 10:18something promising began to appear. 10:21I came in one morning and checked the previous night's work. 10:24And I got excited. 10:26And then I showed it to the lighting director 10:28and she got excited. 10:29Soon, I was showing to the director in a dark room full of 50 people. 10:34In director review, 10:35you hope you might get some nice words, 10:38then you get some notes and fixes, generally. 10:41And then, hopefully, you get a final, 10:43signaling to move on to the next stage. 10:46I gave my intro, and I played the jellyfish scene. 10:50And the director was silent for an uncomfortably long amount of time. 10:55Just long enough for me to think, 10:57"Oh no, this is doomed."

11:00And then he started clapping. 11:03And then the production designer started clapping. 11:06And then the whole room was clapping. 11:14This is the moment that I live for in lighting. 11:17The moment where it all comes together 11:19and we get a world that we can believe in. 11:22We use math, science and code to create these amazing worlds. 11:27We use storytelling and art to bring them to life. 11:30It's this interweaving of art and science 11:34that elevates the world to a place of wonder, 11:38a place with soul, 11:39a place we can believe in, 11:42a place where the things you imagine can become real -- 11:47and a world where a girl suddenly realizes 11:50not only is she a scientist, 11:52but also an artist. 11:54Thank you. 11:55(Applause)
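The underwater color falloff Feinberg describes for "Finding Nemo", losing the red wavelengths first, then green, leaving blue at the far depths, behaves like per-channel exponential (Beer-Lambert) attenuation. A minimal sketch; the absorption coefficients below are invented for illustration, though their ordering (red absorbed fastest, blue slowest) matches the talk.

```python
import math

# Per-channel absorption coefficients, per meter of water.
# These specific values are made up; only the red > green > blue
# ordering comes from the physics described in the talk.
ABSORB = {"red": 0.35, "green": 0.07, "blue": 0.02}

def color_at_depth(rgb, depth_m):
    """Attenuate an RGB color with depth using Beer-Lambert falloff."""
    return tuple(
        round(v * math.exp(-ABSORB[ch] * depth_m), 3)
        for ch, v in zip(("red", "green", "blue"), rgb)
    )

white = (1.0, 1.0, 1.0)
print(color_at_depth(white, 0))   # full, rich color near the eye
print(color_at_depth(white, 10))  # red nearly gone
print(color_at_depth(white, 60))  # only blue survives at far depths
```

In a production renderer the lighting team would push and pull these coefficients artistically, as the talk explains with the green-shifted Sydney Harbour scene, rather than keep them scientifically exact.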




TEDMED2016.5.12---Sangeeta Bhatia: A nanoscale cancer detector


Published: May 12, 2016
Harvard-MIT physician, bioengineer and entrepreneur Sangeeta Bhatia shares how she led her multidisciplinary lab to develop an unusual breakthrough in cancer diagnostics.
While this talk was originally created with Sangeeta in partnership with TEDMED, we were honored to support her in also sharing this talk at TED Broadway in partnership with PBS.





==========Transcript==========

0:21 In the space that used to house a single transistor, we can now fit one billion, and that made it so a computer 0:35 the size of an entire room now fits in your pocket, and that enabled the 0:41 invention of things like smartphones and Fitbits and GPS tracking devices. You 0:46 might say the future is 0:47 small. As an engineer, I'm inspired by this miniaturization revolution in 0:53 computers. As a physician, I'm optimistic about what miniaturization can do for 0:59 human health, and I wonder whether we could use it to reduce the number of lives lost

1:06 due to one of the fastest-growing diseases on earth: 1:10 cancer. Now when I say that, what most people hear me say is that we're working on curing 1:15 cancer, and we are, but it turns out that there's an extraordinary opportunity to 1:21 save lives through the early detection and prevention of cancer. Worldwide, over 1:26 two-thirds of cancer deaths are entirely preventable using measures we already 1:31 know about today. They include vaccination, screening and, of course, stopping smoking. 1:38 But even using the best tools and technologies that we have available, we 1:43 can't find some tumors until ten years after they've started growing, when they 1:48 are some 15 million cancer cells strong. So what if we had better technologies to 1:54 detect the most deadly cancers earlier, when they were just 1:58 getting started and could be 1:59 removed? The vision is that we could save even more lives. Let me show you how

2:04 miniaturization might get us there. This is a microscope that you would find in a 2:09 standard pathology lab, and it would be used for inspecting a tissue specimen 2:13 like a biopsy. The person looking into this $7,000 microscope would 2:19 be a pathologist with years of specialized training in how to spot 2:23 cancer cells. Now compare that to this. This is an image from my colleague at 2:28 Rice University, Rebecca Richards-Kortum, in which she and her team have 2:32 miniaturized a microscope so that it fits onto the end of an optical fiber, in a 2:38 $10 part. Now what that means is that instead of taking a sample from the 2:43 patient to the microscope, you can bring the microscope to the patient. And then, 2:49 instead of relying on an expert to look at the images, what she and her team are 2:53 doing is training the computer to automatically score healthy versus 2:57 cancerous tissue, so a decision to treat can be made in the moment. Now this

3:02 is important, because what they found is that even when they have mobile 3:06 screening trucks that go out into rural communities and 3:10 perform cervical exams and take samples and send them to the lab, days later, 3:15 when women get called about an abnormal test result and asked to come in, 3:19 fully half of them don't turn up, simply because they can't afford the trip. Now 3:24 with the new fiber-optic microscope and automatic computer scoring, they've been 3:28 able to create an integrated diagnosis-and-treatment system, where a provider can go out 3:34 into the community, 3:35 make a diagnosis and provide treatment in the moment, and no one is lost to 3:40 follow-up. So that's one example of how miniaturization might save lives. Now as 3:46 engineers, we think of this as straight-up miniaturization: you took a big thing 3:50 and you made it little. But what I told you about before was that computers 3:54 changed our lives completely once they became small enough for us to take them 3:58 everywhere. So what does the transformational equivalent of that look

4:03 like in cancer detection? What if we had a detector that was so 4:09 small it could travel through your body, find the tumor all by 4:13 itself and send a signal to the outside world? It sounds a little bit like 4:19 science fiction, but actually, nanotechnology allows us to do just that. 4:25 Nanotechnology has allowed us to shrink the parts that make up the detector from the 4:30 width of a human hair, which is a hundred microns, to a thousand times smaller, 4:35 a hundred nanometers, and that has profound implications. At that length scale, 4:40 which we call the nanoscale, materials actually change their properties. So you 4:45 can take a very common material like gold, which looks golden in your jewelry, and if you 4:50 grind it down into dust, into nanoparticles, it actually looks red. You 4:56 can take a more exotic material, like this big black crystal of cadmium

5:01 selenide. If you make nanocrystals out of this material, put them in liquid and 5:07 shine light on them, they glow, and they glow blue, green, yellow, orange or red, depending 5:15 only on their size. Can you imagine an object like that in the macro world? It 5:22 would be like all the denim jeans in your closet are all made of cotton, and 5:28 they glow different colors just because of their size. 5:31 Wild, right? OK, so it's not just the color of materials that changes at the 5:38 nanoscale. What's just as interesting to me is that the way they travel inside 5:44 your body also changes. So here's a little movie to show you what I mean. 5:48 This is a blood vessel in the body, and around the blood vessel is a tumor. OK, 5:54 we're going to inject nanoparticles into the blood vessel and watch how they 5:58 travel from the bloodstream into the tumor. Now it turns out that the blood

6:04 vessels of many tumors are leaky, and so the nanoparticles can leak out. Whether 6:09 or not they leak out depends 6:11 on their size. So in this image, what you see is that the smaller, blue, hundred-nanometer 6:17 nanoparticles can leak out, and the larger, red, five-hundred-nanometer 6:22 particles stay behind. That means, as an engineer, 6:27 depending on how big or small I make a material, I can change where it goes in your 6:33 body. That's the kind of insight that we're going to try to exploit to make a 6:38 better cancer detector. In my lab, we recently did make a detector that was so 6:43 small that it could circulate in the body looking for tumors. We designed this 6:48 detector to listen for tumor invasion: the orchestra of chemical signals that 6:54 tumors need to spread. For a tumor to spread out of the tissue in which it's 6:59 born, it makes chemicals called enzymes to chew through the scaffolding of

7:04 tissues. And we designed our detector to be activated by these enzymes. One enzyme 7:12 can perform a thousand chemical reactions in an hour. In engineering, we 7:17 call this one-to-a-thousand ratio a form of amplification, and it typically 7:21 makes a technology ultra-sensitive. So we've made an ultra-sensitive cancer 7:26 detector. OK, so how do we get this signal that's in the tumor 7:31 out to the outside world, where we can detect it? For that, we're going to use another 7:37 aspect of nanoscale biology, and that has to do with the kidney. The kidney is a 7:42 filter, and its job is to filter waste out of the blood and put it into the 7:47 urine. It turns out that the kidney is a size-dependent filter, so in this movie, 7:53 what you're seeing is that the smaller, five-nanometer particles can get out into the 7:58 urine, and everything larger is retained in the blood.
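The one-to-a-thousand amplification quoted here is simple arithmetic: each enzyme molecule can drive on the order of a thousand reactions per hour, so a small number of tumor enzymes frees a much larger number of signal molecules. A minimal sketch of that arithmetic, with the function name and example counts being illustrative rather than from the talk:

```python
# Figure quoted in the talk: one enzyme can perform ~1,000 reactions per hour.
REACTIONS_PER_ENZYME_PER_HOUR = 1_000

def reporters_released(enzymes: int, hours: float) -> int:
    """Upper bound on signal molecules freed, assuming no substrate limit."""
    return int(enzymes * REACTIONS_PER_ENZYME_PER_HOUR * hours)

# Ten active enzymes over one hour already yield ten thousand reporters.
print(reporters_released(10, 1.0))  # 10000
```

This one-to-a-thousand ratio is what makes the detector ultra-sensitive: the readout scales with the product of enzyme count and time, not with enzyme count alone.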

8:02 OK, so let's put these three concepts together and show you how it works. 8:07 What we've done is make a hundred-nanometer cancer detector. We inject it 8:12 into the bloodstream; it leaks out into the tumor, 8:16 is activated by enzymes in the tumor, and a signal is released that is small enough 8:22 to get filtered out by the kidney into the urine. And in the urine, it's in the 8:28 outside world, where we can detect it. 8:30 OK, so there's one last challenge: how do we detect the signal in the urine if 8:35 it's so tiny? It turns out that these signals are just molecules, and they're 8:40 completely synthetic molecules that we engineer, not the molecules that the 8:45 kidney normally puts into the urine. And what that means is that we can design 8:48 them to be read out by a tool of our choice. So for example, if we make the molecules 8:54 have a unique mass, we can detect them with a fancy, sensitive instrument called 8:58 a mass spectrometer. If instead we're interested in something that's very

9:02 portable and inexpensive, we can design the molecules to be trapped on something 9:07 like paper, like a home pregnancy test. The choice, as engineers, is up to us, and it 9:13 depends on the application that we have in mind. If we're interested in 9:16 portability and we go to something like paper, it turns out there's a whole new 9:20 world of formats emerging in this new field of paper diagnostics that we 9:26 can choose from. 9:27 OK, so where are we going? What I'm going to tell you next is my perspective as a lifelong 9:35 researcher, and it represents a dream I have. I can't say it's a promise, but it 9:41 is a dream, and I think we all deserve to have dreams, even, and maybe especially, 9:46 cancer researchers. What I'm going to describe to you is what I hope will 9:50 happen with my technology, what my team and I will dedicate our hearts and 9:55 souls to trying to make a reality. I dream that one day, instead of going into 10:04 an expensive facility for a screening colonoscopy, or a mammogram, or a Pap

10:09 smear, you could get a shot, wait an hour and do a urine test on a paper strip. 10:19 I imagine that this could happen even without the need for steady electricity 10:22 or a medical professional in the room; maybe they would be far away, 10:27 connected only by the image on a smartphone. I hope that this means that 10:32 we could find tumors much sooner than ten years after they've started growing, 10:37 in patients all around the world, in all walks of life, and that this will lead to 10:43 earlier treatment, and that we could save even more lives than we can today 10:47 through early detection. OK, so I know that sounds like a dream, but actually 10:53 we've made incredible progress in the lab already. In my lab, this 10:57 technology is working for colon, ovarian and lung cancer detection, and

11:03 the next steps are to try and bring this to patients. The things we have 11:06 to do are manufacture a lot of these detectors at scale, show that they're 11:12 safe for use in humans, and of course show that it works. In order to make that 11:16 happen, 11:17 we've assembled a team dedicated to translating the technology from the 11:22 lab to patients. Having given you a sense of the power of miniaturization in 11:28 medicine, I just want to close with one thought about our world, which is 11:34 that you often hear people talking about how it's a small world. Our world is 11:37 getting smaller and more interconnected; we have more virtual conversations now than at any other 11:42 time 11:44 in human history. For me, it's a small world, but in a different way. The 11:50 future is small: it's a world where miniaturization and medicine can come 11:55 together to help us understand, monitor and treat the human body in a molecular 12:02 conversation at the tiniest scales. And in this small world, the opportunities 12:09 for improving human health are enormous. Thank you.
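The talk's three-step chain at 8:02 (leaky tumor vessels, enzyme activation, kidney filtration) can be sketched as a toy size-routing model. All the cutoffs and names below are illustrative assumptions consistent with the talk's examples (100 nm particles leak into tumors while 500 nm ones do not, and only small fragments pass the kidney), not measured values:

```python
TUMOR_LEAK_CUTOFF_NM = 400    # assumed: between the 100 nm and 500 nm cases shown
KIDNEY_FILTER_CUTOFF_NM = 5   # size limit for renal filtration, per the talk
REPORTER_SIZE_NM = 2          # assumed size of the cleaved synthetic reporter

def fate(particle_nm: float, tumor_enzymes_present: bool) -> str:
    """Where does an injected particle of this size end up in the toy model?"""
    if particle_nm >= TUMOR_LEAK_CUTOFF_NM:
        # Too large to escape even leaky tumor vasculature.
        return "retained in bloodstream"
    if not tumor_enzymes_present:
        return "enters tissue, no reporter released"
    # Tumor enzymes cleave off a small reporter, which the kidney passes.
    if REPORTER_SIZE_NM < KIDNEY_FILTER_CUTOFF_NM:
        return "reporter filtered into urine"
    return "reporter retained in blood"

print(fate(500, True))   # retained in bloodstream
print(fate(100, False))  # enters tissue, no reporter released
print(fate(100, True))   # reporter filtered into urine
```

The point of the chain is that only the combination of all three size effects turns "tumor present" into a signal readable in urine.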



TED Talks2016.4.18---Alex Kipman: The Age of Holograms


TED Talks2016.4.18---Alex Kipman: The Age of Holograms
Published: April 18, 2016
Alex Kipman wants to create a new reality — one that puts people, not devices, at the center of everything. With HoloLens, the first fully untethered holographic computer, Kipman brings 3D holograms into the real world, enhancing our perceptions so that we can touch and feel digital content. In this magical demo, explore a future without screens, where technology has the power to transport us to worlds beyond our own. (Featuring Q&A with TED's Helen Walters)



==========Transcript==========

0:07 Thousands of years from now, we'll look back at the first century of computing 0:18 as a fascinating but very peculiar time, 0:23 the only time in history where humans were reduced to live in 2D space, 0:28 interacting with technology as if we were machines: a singular, hundred-year period in the vastness of 0:36 time where humans communicated, were entertained and managed their lives from 0:42 behind a screen. Today, we spend most of our time 0:47 tapping and looking at screens. What happened to interacting with each other? 0:53 I don't know about you, but I feel limited inside this 2D world of monitors

1:00 and pixels. And it is this very limitation, and my desire to connect with people, that 1:06 inspires me as a creator. Put simply: I want to create a new reality, a reality 1:11 where technology brings us closer to each other, 1:17 a reality where people, not devices, are the center of everything. I dream of a 1:23 reality where technology senses what we see, touch and feel, a reality 1:30 where technology no longer gets in our way, but instead embraces who we are. 1:35 I dream of technology on a human path. We have all experienced technology that 1:43 enables people to act more like people, products that enable natural 1:47 interactions, voice controls or biometrics. This is the next step in that 1:57 evolution. This is Microsoft HoloLens, the first fully untethered holographic

2:04 computer. Devices like this will bring 3D holographic content right into our world, 2:10 enhancing the way we experience life beyond our ordinary range of perceptions. Now, I'm 2:18 not thinking about a distant future. I'm talking about today. We are already seeing 2:24 car companies like Volvo designing cars differently with HoloLens, universities 2:30 like Case Western redefining the way medical students learn, and, my personal 2:35 favorite, NASA is using HoloLens to let scientists explore planets 2:42 holographically. Now, this is important. By bringing holograms into our world, I'm not 2:48 just talking about a new device or a better computer. I'm talking about freeing 2:54 ourselves from the 2D confines of traditional computing. Put it this way:

3:02 temporally adjusted, we're like cave people in computer terms. We've barely discovered 3:08 charcoal and started drawing the first stick figures in our cave. Now, this is the 3:15 perspective I apply to my work every single day. And now, for the next few 3:20 minutes, I invite all of you to apply the same perspective to the journey ahead of 3:27 us. Now, as I put this HoloLens on, let me explain the setup a little bit. 3:35 This is probably the most risky demo we have ever done on any stage with HoloLens, and I 3:43 can't think of a better place to do it than here at TED. Momentarily, I am 3:48 going to be seeing holograms right on this stage, just as clearly as I 3:53 can see all of you. Now at the same time, we also have this special camera that 3:58 just walked in onstage, so that all of you can share in this experience with me up

4:03 on all the monitors. 4:07 And what better place to begin our journey than in the computer cave of 2D. Let's 4:15 explore the world around us with this new lens and understand the computer 4:20 world from a brand new perspective. The computer universe is both marvelous and 4:27 primitive. It's a universe based on causality. As developers, we dream the 4:34 different causes, and then we program the different effects. Double click on an 4:38 icon: 4:39 that's a cause. Open an application: that's an effect. Now when we compare this to our 4:47 physical universe, it is overly constraining, because our universe is not 4:53 digital. Our universe is analog. Our universe doesn't think in terms of zero 4:59 or one, true or false, black or white. We exist in a world governed by quantum

5:07 physics, a universe of zero and one both at the same time, a reality based on 5:15 infinite probabilities and shades of gray. You can see how these two worlds collide. So 5:24 why are screens so pervasive in our analog life, from the moment we wake up to the 5:33 moment we fall asleep? I think it's because computers give us superpowers. 5:39 Within the digital universe, we have the power to displace space and the power to 5:44 displace time. It doesn't matter if you're using technology for entertainment, 5:49 productivity or communication. Think of it this way: let's all go home tonight and watch our 5:55 favorite show on television. This is theater: displacement of time and space. As soon

6:03 as I'm done with this TED Talk, I'm going to immediately call my lovely family in 6:07 Seattle. That's displacement of space. 6:12 Now, these are such great superpowers that we put up with the two-dimensional 6:17 limitations of our current digital world. But what if we didn't have to? What if 6:23 we could have those same digital powers in our world? You can already see glimmers 6:31 of this, but I believe our children's children will grow up in a world devoid 6:37 of 2D technology. It's remarkable to dream of this world, a 6:42 world where technology truly understands us, where we live, work and communicate 6:48 with tools that enhance the human experience, not machines that limit our 6:55 humanity. So how do we get there? For me, the answer required looking at the problem from a

7:05 different perspective. 7:09 It required sensing the world from the perspective of a machine. If you're a machine 7:14 trying to sense our world, how would you actually break the problem down? You 7:21 would probably try to classify things as a human, an environment or an object. 7:29 But how would that machine then interact with reality? I can think of three ways. 7:38 First, as a machine, I would observe, or input, reality. Speech recognition 7:46 and biometric authentication are great examples of a machine interacting with 7:50 humans from an input perspective. Secondly, as the machine, I could place 7:57 digital information, or output information, into reality.

8:04 Holograms are examples of a machine interacting with an environment from an 8:08 output perspective. Finally, as a machine, I could exchange energy with the world 8:15 via haptics. Now, imagine being able to feel the temperature of a virtual object, 8:21 or better yet, imagine pushing a hologram and having it push back with 8:27 equal force. With this perspective, we are able to collapse reality into a simple 8:34 matrix. Now here's a secret: as an engineer, I get really excited 8:39 anytime I can reduce something to the matrix. From self-driving cars to 8:45 smartphones 8:46 to this holographic computer on my head, machines are becoming capable of understanding our 8:52 world, and they are starting to interact with us in significantly more personal 8:58 ways. Now, imagine

9:01 having granular control over everything in the world. 9:06 Move the dial one way, 9:10 and you get reality. 9:13 Move the dial the other way, and you get virtual reality. Now, imagine dialing your 9:24 entire environment between virtual and real worlds, down here. Now imagine if I 9:38 could look at all of you and dial you from real humans into elves. When 9:45 technology truly understands our world, it will again transform the ways we 9:49 interact, the ways we work and the ways we play. 9:58 Less than half a century ago,

10:01 courageous men landed on the moon, using computers that were less powerful than 10:07 the phones in your pockets. Six hundred million humans watched them on grainy, 10:13 black-and-white televisions, and the world, the world was mesmerized. Now 10:20 imagine how our children and their children will experience the continued 10:26 exploration of space with technology that understands this world. We already 10:33 live in a world where real-time universal translators exist, and I can 10:38 squint and already see holographic telepresence in our near future. 10:45 In fact, since we've been lucky with our demo so far, let's try doing something 10:49 else even more crazy. I invite you to experience, for the first time anywhere 10:55 in the world, 10:56 here on the TED stage, a real-life

11:00 holographic teleportation, between me and my friend, Dr. Jeffrey Norris, from 11:07 NASA's Jet Propulsion Laboratory. Fingers crossed. Hi, Jeff. How are you doing 11:17 today, Jeff? 11:21 So can you tell us a little bit about where you are? Jeff Norris: Well, I'm actually in three 11:30 places: standing in a room across the street, while standing on this stage 11:36 with you, while standing on Mars, a hundred million miles away. 11:42 Alex Kipman: Wow, a hundred million miles away. This is crazy! 11:45 Can you tell us a little bit more about where all this data from Mars is coming 11:49 from? 11:51 JN: Absolutely. This is a precise holographic replica of Mars, built from data captured 11:58 by the Curiosity Mars rover, that I can explore as easily as a place on Earth.

12:03 Humans are natural explorers. We can instantly understand an environment, just by 12:11 being present in it. We've built tools like our Mars rover to extend our vision 12:17 and lengthen our reach, but for decades, we've explored from a seat behind 12:23 screens and keyboards. Now, we're leaping over all of that, over the giant antennas 12:31 and the relay satellites and the vastness between worlds, to take our 12:35 first steps on this landscape as if we were truly there. Today, a group of 12:43 scientists on our mission are seeing Mars as never before: 12:47 an alien world made a little more familiar, because they're finally 12:51 exploring it as humans should. But our dreams don't have to end with making it 12:58 just like being there. When we dial this real world to the virtual, we can do

13:03 magical things. We can see in invisible wavelengths, or teleport to the top of a mountain. 13:09 Perhaps someday, we'll feel the minerals in a rock just by touching it. We're taking 13:17 the first steps, but we want the whole world to join us in taking the next, because this 13:23 is not a journey for a few, but for all of us. AK: Thank you, Jeff. This is amazing. Thank 13:30 you so much for joining us on the TED stage today. (Applause) 13:50 Our ancestors

14:07 lived in tribes, where we interacted, communicated and worked together. We are all beginning 14:13 to build technology that will enable us to return to the humanity that brought us 14:19 to where we are today, 14:21 technology that will let us stop living inside this 2D world of monitors and 14:27 pixels, and let us start remembering what it feels like to live in our 3D world. 14:38 It's a phenomenal time to be human. 14:41 Thank you. 14:48 Helen Walters: Can you talk about the difference between the demos and the reality of the

15:01 commercial product? Talk about this field-of-view issue. Is this type of 15:06 experience what someone who buys the product will get? AK: It's 15:11 a question we've been receiving in the media for probably the 15:15 last year. Do your research; I have answered that question. 15:18 I purposely ignore it, because it's ultimately the wrong question to ask. 15:23 It's the equivalent of me showing holograms to someone for the 15:29 first time and you then asking, "What's the size of your television?" The field of 15:34 view for the product is almost irrelevant. What we should be talking about is 15:38 the density of lights, or radiance; 15:40 better said, the angular resolution of the things that you see. So 15:45 from that perspective, what you saw: the camera is wearing a HoloLens, so even if 15:49 I wanted to cheat, I can't. What you are seeing is a different lens than 15:53 mine, because the camera has a fisheye lens on it. It's seeing a much wider view 15:59 than the human eye. So if you think about the points of light that show up

16:03 radially from the vision of the camera, which is the thing that matters: how many 16:08 points of light do I get in a given volume? That's the same on this HoloLens 16:13 as on that one. 16:14 Now, this camera sees a much wider view of the world, right? 16:29 So in short, to 16:36 be super crisp: the camera that you see on the screen has a wider field of view 16:40 than the human eye, but the angular resolution of the holograms that you see, 16:44 the points of light per unit of area, is actually the same. 16:54 HW: So you spent a lot of time mapping the stage, so help me out 16:59 here: when I buy a HoloLens and have it at home, I don't need to map my apartment,

17:03 right? AK: Right. The HoloLens maps in real time, at about five frames per second, with 17:08 this technology that we call spatial mapping. So in your home, as soon as you put it on, 17:13 holograms will start showing up in your place, and it will start learning your 17:16 home. In a stage environment, where we're trying to get something on my head to 17:21 communicate with something over there, with all of the wireless connectivity 17:25 that usually brings all conferences down, we don't take the risk 17:29 of trying to do this live. So what we do is pre-map the stage at five 17:34 frames per second, with the same spatial-mapping technology you'd use with the 17:37 product at home, and then we store it, so that when there are shenanigans of wireless 17:42 in an environment like this, between the camera's HoloLens and the one on my head, 17:46 we don't have things disappearing. Ultimately, the holograms are coming from 17:50 this HoloLens and the one down the hall, so if I lose connectivity, you would 17:56 stop seeing the beautiful things on the screen. 17:59 HW: And it was beautiful. So, Jeff,

18:04 this is happening in a room across the street? Tell me more about the 18:14 fact that with holograms you have sight, but you don't have touch, and you 18:20 don't have smell. Is this scientifically useful? That's my 18:25 question. JN: Thank you for the question. Absolutely, I believe that these technologies are 18:31 scientifically useful right now, and that's why we're using them in multiple 18:35 parts of our work at NASA. So we're using it to improve the ways that we explore 18:40 Mars; we're also using it for astronauts on the space station; we're even using it now 18:46 to design the next generation of our spacecraft. AK: Amazing. OK, Jeff, please go 18:52 away. Thank you very much. 18:54 HW: Really, that was amazing. Thank you. AK: Thank you, thank you.
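Kipman's field-of-view versus angular-resolution distinction from the Q&A can be made concrete with a toy calculation: what matters is points of light per degree, and a wider-view camera can match the headset's density while simply showing more of the scene. All the pixel and degree figures below are assumed for illustration, not HoloLens specifications:

```python
def pixels_per_degree(horizontal_pixels: int, fov_degrees: float) -> float:
    """Angular resolution: how many points of light land in each degree of view."""
    return horizontal_pixels / fov_degrees

# Assumed headset numbers: 1280 points of light across a 32-degree field of view.
headset_density = pixels_per_degree(1280, 32.0)
# Assumed wider fisheye camera rig: more pixels spread over a wider field.
camera_density = pixels_per_degree(4000, 100.0)

print(round(headset_density))  # 40 points of light per degree
print(round(camera_density))   # 40 as well: same density, wider view
```

In this framing, asking about field of view alone is like asking about a television's size without asking how sharp it is; the density is the quantity the two displays share.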

