Tan Le: A headset that reads your brainwaves
=========================
http://dotsub.com/view/c6510948-aca6-4e13-947f-c283fb532159

Up until now, our communication with machines has always been limited to conscious and direct forms. Whether it's something simple like turning on the lights with a switch, or even as complex as programming robotics, we have always had to give a command to a machine, or even a series of commands, in order for it to do something for us. Communication between people, on the other hand, is far more complex and a lot more interesting, because we take into account so much more than what is explicitly expressed. We observe facial expressions, body language, and we can intuit feelings and emotions from our dialogue with one another. This actually forms a large part of our decision-making process. Our vision is to introduce this whole new realm of human interaction into human-computer interaction, so that computers can understand not only what you direct them to do, but also respond to your facial expressions and emotional experiences. And what better way to do this than by interpreting the signals naturally produced by our brain, our center for control and experience.

Well, it sounds like a pretty good idea, but this task, as Bruno mentioned, isn't an easy one for two main reasons: First, the detection algorithms. Our brain is made up of billions of active neurons, around 170,000 km of combined axon length. When these neurons interact, the chemical reaction emits an electrical impulse which can be measured. The majority of our functional brain is distributed over the outer surface layer of the brain. And to increase the area that's available for mental capacity, the brain surface is highly folded. Now this cortical folding presents a significant challenge for interpreting surface electrical impulses. Each individual's cortex is folded differently, very much like a fingerprint. So even though a signal may come from the same functional part of the brain, by the time the structure has been folded, its physical location is very different between individuals, even identical twins. There is no longer any consistency in the surface signals.

Our breakthrough was to create an algorithm that unfolds the cortex, so that we can map the signals closer to their source, and therefore make it capable of working across a mass population. The second challenge is the actual device for observing brainwaves. EEG measurements typically involve a hairnet with an array of sensors, like the one that you can see here in the photo. A technician will put the electrodes onto the scalp using a conductive gel or paste, usually after a procedure of preparing the scalp by light abrasion. Now this is quite time consuming and isn't the most comfortable process. And on top of that, these systems actually cost in the tens of thousands of dollars.
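
As a rough illustration of the unfolding idea, here is a minimal sketch in Python, assuming a known forward model (a "leadfield" matrix) that maps cortical source activity to the sensor readings; a regularized inverse of that matrix maps sensor readings back toward source space. The actual algorithm is proprietary, so this minimum-norm estimate is only a stand-in for the general technique, with all dimensions and matrices hypothetical.

```python
import numpy as np

# Toy illustration: recover source-space activity from surface EEG readings
# via a regularized linear inverse (minimum-norm estimate). The leadfield
# matrix and dimensions below are hypothetical, not the real system's.
n_sensors, n_sources = 14, 64
rng = np.random.default_rng(0)
leadfield = rng.standard_normal((n_sensors, n_sources))  # assumed forward model
x = rng.standard_normal(n_sensors)                       # one sensor sample

lam = 0.1  # regularization strength
# s = L^T (L L^T + lam I)^{-1} x
s = leadfield.T @ np.linalg.solve(
    leadfield @ leadfield.T + lam * np.eye(n_sensors), x)
print(s.shape)  # (64,) estimated source activations
```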

So with that, I'd like to invite onstage Evan Grant, one of last year's speakers, who's kindly agreed to help me demonstrate what we've been able to develop.

(Applause)

So the device that you see is a 14-channel, high-fidelity EEG acquisition system. It doesn't require any scalp preparation, no conductive gel or paste. It only takes a few minutes to put on and for the signals to settle. It's also wireless, so it gives you the freedom to move around. And compared to the tens of thousands of dollars for a traditional EEG system, this headset only costs a few hundred dollars. Now on to the detection algorithms. So facial expressions -- like the emotional experiences I mentioned before -- are actually designed to work out of the box, with some sensitivity adjustments available for personalization. But with the limited time we have available, I'd like to show you the cognitive suite, which is, basically, the ability to move virtual objects with your mind.
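
To make the rest of the demo concrete, here is a hypothetical sketch of what one sample from such a 14-channel headset might look like to application code. The channel names follow a standard 14-electrode 10-20 montage, and the streaming interface is invented for illustration -- the real SDK will differ.

```python
from dataclasses import dataclass
from typing import List
import random
import time

# Hypothetical 14-channel montage; the real device's labels may differ.
CHANNELS = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
            "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]

@dataclass
class EEGSample:
    timestamp: float
    values: List[float]  # one microvolt reading per channel

def stream_samples(rate_hz: int = 128):
    """Simulated wireless sample stream (a stand-in for real hardware)."""
    while True:
        yield EEGSample(time.time(), [random.gauss(0, 10) for _ in CHANNELS])
        time.sleep(1 / rate_hz)
```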

Now, Evan is new to this system, so what we have to do first is create a new profile for him. He's obviously not Joanne -- so we'll "add user." Evan. Okay. So the first thing we need to do with the cognitive suite is to start with training a neutral signal. With neutral, there's nothing in particular that Evan needs to do. He just hangs out. He's relaxed. And the idea is to establish a baseline or normal state for his brain, because every brain is different. It takes eight seconds to do this. And now that that's done, we can choose a movement-based action. So Evan, choose something that you can visualize clearly in your mind.
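
A minimal sketch of what that neutral-training step might compute, reusing the hypothetical stream_samples() generator above: record eight seconds of relaxed data and summarize it as a per-channel mean and spread to serve as the user's baseline. The real system's feature extraction is not public; this only shows the shape of the idea.

```python
import numpy as np

# Hypothetical neutral-baseline training: eight seconds of relaxed data,
# summarized per channel as a mean and standard deviation.
def train_neutral(stream, rate_hz=128, seconds=8):
    n = rate_hz * seconds
    data = np.array([next(stream).values for _ in range(n)])  # (n, 14)
    return data.mean(axis=0), data.std(axis=0)

# baseline_mean, baseline_std = train_neutral(stream_samples())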

Evan Grant: Let's do "pull."

Tan Le: Okay. So let's choose "pull." So the idea here now is that Evan needs to imagine the object coming forward into the screen. And there's a progress bar that will scroll across the screen while he's doing that. The first time, nothing will happen, because the system has no idea how he thinks about "pull." But maintain that thought for the entire duration of the eight seconds. So: one, two, three, go. Okay. So once we accept this, the cube is live. So let's see if Evan can actually try and imagine pulling. Ah, good job! (Applause) That's pretty amazing.
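
One way the single eight-second "pull" recording could then be used live, sketched under the same assumptions as above: build a template from the training window, then label each incoming window by whichever template its features sit closer to. This nearest-template scheme is an illustration, not the shipped algorithm.

```python
import numpy as np

# Hypothetical one-shot detection: compare each live window's features to
# the "neutral" and "pull" templates recorded during training.
def featurize(window):                  # window: (samples, channels) array
    return window.mean(axis=0)          # crude per-channel mean feature

def train_action(stream, rate_hz=128, seconds=8):
    data = np.array([next(stream).values for _ in range(rate_hz * seconds)])
    return featurize(data)              # template for the trained thought

def detect(window, neutral_template, pull_template):
    f = featurize(window)
    if np.linalg.norm(f - pull_template) < np.linalg.norm(f - neutral_template):
        return "pull"
    return "neutral"
```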

(Applause)

So we have a little bit of time available, so I'm going to ask Evan to do a really difficult task. And this one is difficult because it's all about being able to visualize something that doesn't exist in our physical world. This is "disappear." So what you want to do -- at least with movement-based actions, we do them all the time, so you can visualize them. But with "disappear," there's really no analogy. So Evan, what you want to do here is to imagine the cube slowly fading out, okay? Same sort of drill. So: one, two, three, go. Okay. Let's try that. Oh, my goodness. He's just too good. Let's try that again.

EG: Losing concentration.

(Laughter)

TL: But we can see that it actually works, even though you can only hold it for a little bit of time. As I said, it's a very difficult process to imagine this. And the great thing about it is that we've only given the software one instance of how he thinks about "disappear." As there is a machine learning algorithm in this --

(Applause)

Thank you. Good job. Good job.

(Applause)

Thank you, Evan, you're a wonderful, wonderful example of the technology.

So as you saw before, there is a leveling system built into this software, so that as Evan, or any user, becomes more familiar with the system, they can continue to add more and more detections, so that the system begins to differentiate between distinct thoughts. And once you've trained up the detections, these thoughts can be assigned or mapped to any computing platform, application or device.
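
That "map a trained detection to anything" step needs nothing more exotic than a table from detection labels to actions; here is a small sketch, with all function and label names invented for illustration.

```python
# Hypothetical wiring of detection labels to application actions.
def pull_object():
    print("object moves toward the screen")

def hide_object():
    print("object fades out")

ACTION_MAP = {
    "pull": pull_object,
    "disappear": hide_object,
}

def on_detection(label: str):
    action = ACTION_MAP.get(label)
    if action is not None:
        action()

on_detection("pull")  # -> object moves toward the screen
```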

So I'd like to show you a few examples, because there are many possible applications for this new interface. In games and virtual worlds, for example, your facial expressions can naturally and intuitively be used to control an avatar or virtual character. Obviously, you can experience the fantasy of magic and control the world with your mind. And also, colors, lighting, sound and effects can dynamically respond to your emotional state to heighten the experience that you're having, in real time. And moving on to some applications developed by developers and researchers around the world, with robots and simple machines, for example -- in this case, flying a toy helicopter simply by thinking "lift" with your mind.

The technology can also be applied to real-world applications -- in this example, a smart home. You know, from the user interface of the control system to opening or closing the curtains. And of course also the lighting -- turning the lights on or off. And finally, to real life-changing applications, such as being able to control an electric wheelchair. In this example, facial expressions are mapped to the movement commands.

Man: Now blink right to go right. Now blink left to turn back left. Now smile to go straight.
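
The wheelchair demo above boils down to the same kind of mapping, this time from facial-expression events to movement commands; the event and command names here are hypothetical.

```python
# Hypothetical expression-to-command table for the wheelchair demo.
EXPRESSION_TO_COMMAND = {
    "blink_right": "turn_right",
    "blink_left": "turn_left",
    "smile": "forward",
}

def drive(expression_event: str) -> str:
    # Unknown expressions stop the chair -- a safe default.
    return EXPRESSION_TO_COMMAND.get(expression_event, "stop")

assert drive("smile") == "forward"
assert drive("frown") == "stop"
```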

TL: We really -- Thank you.

(Applause)

We are really only scratching the surface of what is possible today. And with the community's input, and also with the involvement of developers and researchers from around the world, we hope you can help us to shape where the technology goes from here. Thank you so much.
