<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Human Machine Interfaces | Future Markets Magazine</title>
	<atom:link href="https://future-markets-magazine.com/en/innovators-en/hmi-innovators-en/feed/" rel="self" type="application/rss+xml" />
	<link>https://future-markets-magazine.com/en/innovators-en/hmi-innovators-en/</link>
	<description></description>
	<lastBuildDate>Tue, 29 Oct 2024 16:21:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.2</generator>

<image>
	<url>https://future-markets-magazine.com/wp-content/uploads/2021/04/cropped-TQ_byEBV_512x512px-2-32x32.png</url>
	<title>Human Machine Interfaces | Future Markets Magazine</title>
	<link>https://future-markets-magazine.com/en/innovators-en/hmi-innovators-en/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Exciting Human Machine Interfaces Start-ups</title>
		<link>https://future-markets-magazine.com/en/innovators-en/exciting-human-machine-interfaces-start-ups/</link>
		
		<dc:creator><![CDATA[The Quintessence]]></dc:creator>
		<pubDate>Tue, 16 Jan 2024 13:20:48 +0000</pubDate>
				<category><![CDATA[Human Machine Interfaces]]></category>
		<category><![CDATA[Innovators]]></category>
		<guid isPermaLink="false">https://future-markets-magazine.com/?p=11985</guid>

					<description><![CDATA[<p>Start-ups around the world are working on innovative solutions for increasingly natural interactions with machines and&#8230;</p>
<p>The post <a href="https://future-markets-magazine.com/en/innovators-en/exciting-human-machine-interfaces-start-ups/">Exciting Human Machine Interfaces Start-ups</a> appeared first on <a href="https://future-markets-magazine.com/en/">Future Markets Magazine</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="p1"><span class="s1"><b>Start-ups around the world are working on innovative solutions for increasingly natural interactions with<span class="Apple-converted-space">&nbsp;</span>machines and computers. Here we present six exciting companies as examples of this inventive spirit.</b></span></p>
<h2 class="p2"><span class="s1"><b>Recognising Cognitive State</b></span></h2>
<p class="p3">The deep-tech brain monitoring developed by CorrActions utilises existing motion sensors, for example in the steering wheel, to detect the cognitive state of drivers and passengers. It analyses<span class="Apple-converted-space">&nbsp;</span>micro-movements that indicate a range of cognitive symptoms, such as whether a driver is drunk or overly tired. The existing hardware in the vehicle does not need to be changed.</p>
<p class="p4"><a href="http://www.corractions.com" target="_blank" rel="noopener noreferrer">www.corractions.com</a></p>
<h2 class="p2"><span class="s1"><b>Transparent Display</b></span></h2>
<p class="p3">The transparency of the 55-inch OLED panel developed by United Screens is about 40&nbsp;percent. This allows objects placed behind the panel to remain visible through the screen, which can be used for staging purposes. An infrared touch frame captures up to ten simultaneous touch points, and the interface invites intuitive interaction with the content shown. The display can be used as an eye-catcher at an event or for digital signage.</p>
<p class="p4"><a href="http://www.united-screens.tv" target="_blank" rel="noopener noreferrer">www.united-screens.tv</a></p>
<h2 class="p5"><span class="s1"><b>Feeling the Virtual World</b></span></h2>
<p class="p3">HaptX has developed Human Machine Interfaces in the form of gloves that simulate touch sensations using hundreds of microfluidic <a href="https://future-markets-magazine.com/en/encyclopedia/actuator/" target="_blank" title="A component which converts electronic signals into mechanical motion or other physical quantities, such as&hellip;" class="encyclopedia">actuator</a>s. This allows for natural interaction and true touch haptics in virtual reality and robotics. In a multiplayer collaboration, multiple users can work in the same virtual environment and touch the same objects.</p>
<p class="p4"><a href="http://www.haptx.com" target="_blank" rel="noopener noreferrer">www.haptx.com</a></p>
<h2 class="p2"><span class="s1"><b>Lightweight AR Glasses </b></span></h2>
<p class="p3">Kura&rsquo;s AR headset offers 95 percent lens transparency and a 150-degree field of view in a compact form factor that is almost the same as a normal pair of glasses. Thanks to the high transparency, natural eye contact is possible, and the user can perform other tasks while using the headset. No light escapes forward, so others cannot see the display.</p>
<p class="p6"><a href="http://www.kura.tech" target="_blank" rel="noopener noreferrer">www.kura.tech</a></p>
<h2 class="p2"><span class="s1"><b>Bandwidth for the Brain</b></span></h2>
<p class="p3">Paradromics is bringing to market a high-data-rate Brain Computer Interface. The first application of the interface is a BCI-capable communication aid for people with severe motor disabilities. Cortical modules record signals from more than 1,600 individual neurons; a cranial hub powers the cortical modules and completes signal processing.</p>
<p class="p4"><a href="http://www.paradromics.com" target="_blank" rel="noopener noreferrer">www.paradromics.com</a></p>
<h2 class="p8"><span class="s1"><b>Voice-based HMI</b></span></h2>
<p class="p3">Linguwerk has developed voice recognition specifically for intuitive human-machine communication. The solution incorporates a variety of input and output modalities into the HMI behaviour of the machine, device or assistant, in addition to voice recognition and speech output. With an individually configurable wake word, the voice interface can be activated touch-free and reliably.</p>
<p class="p4"><a href="http://www.linguwerk.de" target="_blank" rel="noopener noreferrer">www.linguwerk.de</a></p>
<p>The post <a href="https://future-markets-magazine.com/en/innovators-en/exciting-human-machine-interfaces-start-ups/">Exciting Human Machine Interfaces Start-ups</a> appeared first on <a href="https://future-markets-magazine.com/en/">Future Markets Magazine</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The inventor of the mouse</title>
		<link>https://future-markets-magazine.com/en/innovators-en/the-inventor-of-the-mouse/</link>
		
		<dc:creator><![CDATA[The Quintessence]]></dc:creator>
		<pubDate>Thu, 11 Jan 2024 10:39:46 +0000</pubDate>
				<category><![CDATA[Human Machine Interfaces]]></category>
		<category><![CDATA[Innovators]]></category>
		<guid isPermaLink="false">https://future-markets-magazine.com/?p=11879</guid>

					<description><![CDATA[<p>Douglas Engelbart’s numerous technological innovations were crucial to the development of personal computing. His work&#8230;</p>
<p>The post <a href="https://future-markets-magazine.com/en/innovators-en/the-inventor-of-the-mouse/">The inventor of the mouse</a> appeared first on <a href="https://future-markets-magazine.com/en/">Future Markets Magazine</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="p1"><span class="s1"><b>Douglas Engelbart&rsquo;s numerous technological innovations were crucial to the development of personal computing. His work helped make computers operable for everyone.</b></span></p>
<p class="p1"><span class="s1">I</span><span class="s1">t was December 1968. An as yet unknown scientist from the Stanford Research Institute stood before a silent crowd in San Francisco and began what would go down in history as &ldquo;the mother of all demos.&rdquo; In his 90-minute demonstration, he presented practically everything that would later define modern computer technology: video conferencing, hyperlinks, networked collaboration, digital text processing and something called a &ldquo;mouse&rdquo;.</span></p>
<h2 class="p3"><span class="s1"><b>The world of tomorrow</b></span></h2>
<p class="p2"><span class="s1">The scientist who demonstrated the potential of collaboration with computers to the astonished audience was Douglas Engelbart. Not a computer specialist, but an engineer and passionate inventor. &ldquo;Back then, most people thought that computers were only for computation&nbsp;&ndash; big brains to crunch numbers. The concept of interactive computing was alien,&rdquo; he recalled years later. &ldquo;It was hard for people to grok what we did at my lab, the Augmentation Research Center at SRI in Menlo Park. So I wanted to demonstrate the flexibility a computer could offer: the world of tomorrow.&rdquo;</span></p>
<h2 class="p3"><span class="s1"><b>Visionary engineer</b></span></h2>
<p class="p4">Engelbart (1925&nbsp;&ndash; 2013) graduated from high school in 1942 and then studied electrical engineering at Oregon State University. During World War II, he served as a <a href="https://future-markets-magazine.com/en/encyclopedia/radar/" target="_blank" title="Radio detection and ranging" class="encyclopedia">radar</a> technician. In 1948, he earned his bachelor&rsquo;s degree and worked for the NACA Ames Laboratory (a precursor to NASA). He then applied to the graduate program in electrical engineering at the University of California, Berkeley, and earned his Ph.D. in 1955. A year later, he left the university to work for the Stanford Research Institute (SRI).</p>
<p class="p4">At SRI, Engelbart acquired a dozen patents within two years and worked on magnetic computer components, fundamental phenomena of digital devices and the scalability potential of miniaturisation. In 1962, he published his visionary work &ldquo;Augmenting Human Intellect: A Conceptual Framework&rdquo;, in which he outlined his ideas for using computers to enhance human intelligence. Douglas Engelbart once articulated his motivation for his developments as follows: &ldquo;The complexity of the problems facing mankind is growing faster than our ability to solve them.&rdquo;</p>
<h2 class="p3"><span class="s1"><b>Enhancing human capabilities</b></span></h2>
<p class="p2"><span class="s1">Engelbart wanted to use technological innovations to expand human capabilities. His goal was not for people to have less to do because of technology, but for them to achieve more with it. He saw the computer as a suitable medium to support and enable human intellect, thereby allowing highly complex problems to be solved more swiftly. An example of this extension of human intellect is the &ldquo;X-Y Position Indicator for a Display System&rdquo;, which allowed direct manipulation of elements on the screen. &ldquo;I first started making notes for the mouse in 1961. At the time, the popular device for pointing on the screen was a light pen, which had come out of the <a href="https://future-markets-magazine.com/en/encyclopedia/radar/" target="_blank" title="Radio detection and ranging" class="encyclopedia">radar</a> program during the war. It was the standard way to navigate, but I didn&rsquo;t think it was quite right.&rdquo; Douglas Engelbart later could not remember who came up with the term &ldquo;mouse&rdquo; for the novel operating device. It just looked somewhat like a mouse: a wooden box with a cable at one end; on top, a red button to click; underneath, two wheels that registered its movements.</span></p>
<h2 class="p3"><span class="s1"><b>Part of a much larger project</b></span></h2>
<p class="p2"><span class="s1">&ldquo;The mouse was just a tiny part of a much larger project that aimed to improve human intellect,&rdquo; Engelbart said. Because this rudimentary-looking device greatly facilitated the operation of a graphical user interface&nbsp;&ndash; also a development by Douglas Engelbart and his team. A <a href="https://future-markets-magazine.com/en/encyclopedia/graphical-user-interface-gui/" target="_blank" title="A user interface that graphically displays information, typically with movable windows, buttons and icons." class="encyclopedia">graphical user interface (GUI)</a> uses visual elements such as windows, buttons and menus through which the user can interact with the software. Initially, computers consisted only of text blocks and required extensive knowledge of programming and computer interfaces to operate them.</span></p>
<h2 class="p3"><span class="s1"><b>Foundation for Apple&rsquo;s success</b></span></h2>
<p class="p2"><span class="s1">However, Engelbart was perhaps too far ahead of his time; the presentation at the Fall Joint Computer Conference and the significance of his inventions were quickly forgotten. Engelbart failed to convince SRI, investors or other potential sponsors of his vision. It wasn&rsquo;t until 1980 that he signed a licensing agreement with the two Apple founders Steve Jobs and Steve Wozniak for the patent of the mouse&nbsp;&ndash; receiving 40,000&nbsp;US dollars for it. Four years later, Apple introduced the &ldquo;Macintosh&rdquo; based on Engelbart&rsquo;s ideas: with a mouse and graphical user interface. Today, Engelbart&rsquo;s vision of a computer for everyone has long since become a reality. When he died in 2013, Apple co-founder Wozniak honoured him with the words: &ldquo;Everything we have in computers can be traced to his thinking. To me, he is a god. He gets recognised for the mouse, but he really did an awful lot of incredible stuff for computer interfaces and networking.&rdquo;</span></p>
<p>The post <a href="https://future-markets-magazine.com/en/innovators-en/the-inventor-of-the-mouse/">The inventor of the mouse</a> appeared first on <a href="https://future-markets-magazine.com/en/">Future Markets Magazine</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Artificial intelligence in HMI solutions</title>
		<link>https://future-markets-magazine.com/en/innovators-en/artificial-intelligence-in-hmi-solutions/</link>
		
		<dc:creator><![CDATA[The Quintessence]]></dc:creator>
		<pubDate>Thu, 11 Jan 2024 10:08:24 +0000</pubDate>
				<category><![CDATA[Human Machine Interfaces]]></category>
		<category><![CDATA[Innovators]]></category>
		<guid isPermaLink="false">https://future-markets-magazine.com/?p=11864</guid>

					<description><![CDATA[<p>Karl Lehnhoff, Director Segment Industrial, Scientific &#38; Medical at EBV Elektronik, on trends in the&#8230;</p>
<p>The post <a href="https://future-markets-magazine.com/en/innovators-en/artificial-intelligence-in-hmi-solutions/">Artificial intelligence in HMI solutions</a> appeared first on <a href="https://future-markets-magazine.com/en/">Future Markets Magazine</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="p1"><span class="s1"><b>Karl Lehnhoff, Director Segment Industrial, Scientific &amp; </b></span><b>Medical at EBV Elektronik</b><span class="s1"><b>, on trends in the field of Human Machine Interfaces.</b></span></p>
<p class="p1"><span class="s1">O</span><span class="s1">ur daily life is now permeated by technology. With the complexity of technologies that surround us every day, there&rsquo;s a growing need for Human Machine Interfaces that provide a positive user experience. The possibilities for interaction are becoming increasingly diverse: the range of HMI solutions extends from push-buttons to multi-touch screens to voice and gesture control. This breadth is something that particularly fascinates Karl Lehnhoff.</span></p>
<p class="p3"><span class="s2"><b>A light switch, a computer mouse, a neuroimplant for controlling a prosthesis&nbsp;&ndash; which of these do you consider a Human Machine Interface?</b></span></p>
<p class="p2"><span class="s3"><b>Karl Lehnhoff:</b> </span>That&rsquo;s easy to answer&nbsp;&ndash; all of them. Switches are a basic HMI. But other things like a computer mouse are clearly Human Machine Interfaces. Of course, there are high-tech devices like Brain Machine Interfaces, representing the most advanced variant of an HMI.</p>
<p class="p3"><span class="s2"><b>What technologies for Human-Machine Interaction are currently in high demand?</b></span></p>
<p class="p2"><span class="s4"><b>K.&thinsp;L.:</b></span><span class="s1"> Let&rsquo;s start with the classic: the switches. We use them everywhere. And we will continue to use them in the future. On the other hand, we are using more and more touchscreens, ranging from smartphones to those found in cars. In the future, we will integrate more haptic feedback. What&rsquo;s also becoming more established is voice recognition. We&rsquo;re already using it with smartphones or with Alexa and Siri in smart homes. It will be used in the future in other applications as well, for example in industrial production. Gesture control is also becoming increasingly popular.</span></p>
<p class="p2">But the HMI area also includes solutions for biometric <a href="https://future-markets-magazine.com/en/encyclopedia/authentication/" target="_blank" title="Ensures that the communication partner at the other end is authentic." class="encyclopedia">authentication</a>, such as facial recognition on phones or other applications, fingerprint scanners and iris scanners.</p>
<p class="p2"><span class="s5">What I also find exciting are augmented and virtual reality. At this year&rsquo;s Hannover Messe, for example, I observed a solution where a robot for the logistics sector is controlled via smart glasses. </span></p>
<p class="p3"><span class="s5"><b>Touchscreens are currently used almost everywhere&nbsp;&ndash; where is the development heading? </b></span></p>
<p class="p2"><span class="s4"><b>K.&thinsp;L.:</b></span><span class="s1"> Today, touchscreens are everywhere, and they are continually being improved and refined. We are already seeing 3D touchscreens and the possibility of integrating haptic feedback. In the future, touchscreens will also be combined with other HMI technologies such as gesture control.</span></p>
<p class="p3"><span class="s1"><b>How can haptic feedback be integrated into a touchscreen?</b></span></p>
<p class="p2"><span class="s6"><b>K.&thinsp;L.:</b></span> Typically, a motor or piezoelectric <a href="https://future-markets-magazine.com/en/encyclopedia/actuator/" target="_blank" title="A component which converts electronic signals into mechanical motion or other physical quantities, such as&hellip;" class="encyclopedia">actuator</a> is used today, which provides a tangible response. There are also displays that have tactile feedback layers. They &ldquo;create&rdquo; buttons under the display so that you can feel when you press them. Feedback is not only relevant for touchscreens but also for other applications where buttons behind glass provide feedback. Here too, a motor is often used to provide a mechanical response. This is even possible when the operator wears gloves&nbsp;&ndash; although in this case, the feedback force must be greater, perhaps in a range of 1 or 2G, otherwise, you don&rsquo;t feel it through the glove.</p>
<p class="p3"><span class="s1"><b>When we talk about HMIs today, we can hardly avoid AI. Besides its use in voice and gesture recognition&nbsp;&ndash; what role does AI play in HMIs? </b></span></p>
<p class="p2"><span class="s6"><b>K.&thinsp;L.:</b></span> We may not always recognise it, but we are already using a lot of AI today &ndash; for example with smartphones and for voice or facial recognition. It is often used in image processing. In the future, for example with voice recognition, it will be about improving language comprehension and recognising more words. AI is also needed for Brain Computer Interfaces, virtual and <a href="https://future-markets-magazine.com/en/encyclopedia/augmented-reality/" target="_blank" title="A combination of the perceived real world and virtual reality generated by computer. Users are&hellip;" class="encyclopedia">augmented reality</a>, or for predictive analysis. <a href="https://future-markets-magazine.com/en/encyclopedia/machine-learning/" target="_blank" title="Procedure by which computer systems acquire knowledge independently and can expand their knowledge, allowing them&hellip;" class="encyclopedia">Machine learning</a> will also play an increasingly important role, as it can enable the device to continue learning on its own, and perhaps even teach itself a new word.</p>
<p class="p3"><span class="s1"><b>What remains the key to an ideal<span class="Apple-converted-space">&nbsp;</span>user experience?</b></span></p>
<p class="p2"><span class="s6"><b>K.&thinsp;L.:</b></span> One of the most important points is recognition accuracy. I experience this every day with my car&nbsp;&ndash; it understands some words, not others. So we need to have high accuracy and reliability. AI can help here. But there are a lot of other criteria for a good HMI: <a href="https://future-markets-magazine.com/en/encyclopedia/data-protection/" target="_blank" title="Protection of the sensitive interests and privacy of natural persons and legal entities against misuse&hellip;" class="encyclopedia">data protection</a>, feedback, dealing with errors. But the most important points for me are reliability and accuracy.</p>
<p class="p3"><span class="s1"><b>With the range of technologies&nbsp;&ndash; how can EBV Elektronik help realise an HMI?</b></span></p>
<p class="p2"><span class="s6"><b>K.&thinsp;L.:</b></span> We are one of the leading specialists for semiconductors. So we can help with the selection of suitable technology. We can also advise customers on software. For this, we have our segment structure with the different market and technology segments. This is also reflected in the field, with our Field Application Engineers for technologies and systems. And we can also provide support in production, supply chain management and all the other things needed to bring a product to market.</p>
<p class="p3"><span class="s1"><b>Do you also work with your sister companies from the Avnet group? How does the customer benefit from this?</b></span></p>
<p class="p2"><span class="s6"><b>K.&thinsp;L.:</b></span> We do. For example, our sister company Avnet Abacus supplies connection technology, passive components and electromechanics. So we can cover the complete bill of materials. Avnet Embedded offers complete solutions for customer<span class="Apple-converted-space">&nbsp;</span>applications. They look after the development process, including production, and can work in a wide range of applications. And then we have our software specialist Witekio, which covers the entire development from the lower software levels to embedded applications and connectivity. EBV Elektronik is ultimately a full-service partner. This means the customer has only one point of contact through which they receive all the information and the entire service.</p>
<p class="p3"><span class="s2"><b>What do the current developments in the HMI sector mean from the semiconductor<span class="Apple-converted-space">&nbsp;</span>industry&rsquo;s perspective?</b></span></p>
<p class="p2"><span class="s6"><b>K.&thinsp;L.:</b></span> We clearly recognise the trend away from microcontrollers to microprocessors. This has a lot to do with the increasing functionalities of the touchscreen. This trend is now also transferring to other applications. The display itself is becoming more complex. This can no longer be met with microcontrollers. So in the future, it will depend on more powerful microprocessors. Of course, you also need more computing power to cater for the increasing use of artificial intelligence. Future 3D touchscreens will also require more powerful processors than in the past.</p>
<p class="p3"><span class="s1"><b>Which developments in the HMI sector do you find particularly exciting at the moment&nbsp;&ndash; and why?</b></span></p>
<p class="p2"><span class="s6"><b>K.&thinsp;L.:</b></span> Voice and gesture recognition &ndash; because they enable natural interaction like with a human. They are helpful in many situations, both when driving and in industry. But the key to all this will be artificial intelligence.</p>
<p class="p3"><span class="s1"><b>Do you think we will be able to control machines with our thoughts alone in the future?</b></span></p>
<p class="p2"><span class="s6"><b>K.&thinsp;L.:</b></span> That&rsquo;s hard to say. But from a medical point of view, it&rsquo;s a very interesting area. I hope this technology will be able to help people with physical limitations live better.</p>
<p class="p3"><span class="s1"><b>And do you understand the fears of many people when they think of a chip in their brain?</b></span></p>
<p class="p2"><span class="s6"><b>K.&thinsp;L.:</b></span> On the one hand, I understand the fears, but I think it&rsquo;s also a question of personal situation. Someone with a restriction and whose daily life could benefit from such a chip is likely to see the advantages. People who have no need for it are more likely to perceive it as a risk.</p>
<p class="p3"><span class="s1"><b>Finally: What fascinates you particularly about the topic of HMIs?</b></span></p>
<p class="p2"><span class="s6"><b>K.&thinsp;L.:</b></span> For me, the fascinating thing is that the field is so broad. Starting with the on/off switch through touchscreens and voice recognition to the Brain Computer Interface. And after the HMIs we have today, something new will come along. I don&rsquo;t know what it will look like, but people are investing time in research and development and trying to go new ways. I like that.</p>
<p>The post <a href="https://future-markets-magazine.com/en/innovators-en/artificial-intelligence-in-hmi-solutions/">Artificial intelligence in HMI solutions</a> appeared first on <a href="https://future-markets-magazine.com/en/">Future Markets Magazine</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Predictive robots: our future assistants</title>
		<link>https://future-markets-magazine.com/en/innovators-en/predictive-robots-our-future-assistants/</link>
		
		<dc:creator><![CDATA[The Quintessence]]></dc:creator>
		<pubDate>Fri, 05 Jan 2024 12:10:31 +0000</pubDate>
				<category><![CDATA[Current Edition]]></category>
		<category><![CDATA[Human Machine Interfaces]]></category>
		<category><![CDATA[Innovators]]></category>
		<guid isPermaLink="false">https://future-markets-magazine.com/?p=11503</guid>

					<description><![CDATA[<p>One of the biggest challenges in developing Human Machine Interfaces is natural interaction. Solutions such&#8230;</p>
<p>The post <a href="https://future-markets-magazine.com/en/innovators-en/predictive-robots-our-future-assistants/">Predictive robots: our future assistants</a> appeared first on <a href="https://future-markets-magazine.com/en/">Future Markets Magazine</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="p1">One of the biggest challenges in developing Human Machine Interfaces is natural interaction. Solutions such as gesture and voice control have already enabled significant progress to be made in this area. Recently, the focus has also shifted towards controlling machines with thoughts: Brain Machine Interfaces measure the brain&rsquo;s EEG signals and use these to derive control commands for computers, machines or robots. The DFKI (German Research Center for Artificial Intelligence) is one of the pioneers in the use of EEG data for interaction with robotic systems. With its latest research project, EXPECT, the research centre is aiming to develop an adaptive, self-learning platform for human-robot collaboration. It should not only enable various forms of active interaction but also be capable of deriving human intention from gestures, speech, eye movements and brain activity: the machine should be able to anticipate what the human is going to do next. Professor Elsa Andrea Kirchner, project leader for the Robotics Innovation Center, provides insights into the project and the state of research on Brain Machine Interfaces.</p>
<p class="p3"><span class="s1"><b>Controlling machines with a brain chip is already being tested by companies such as Neuralink and Synchron. Is this indeed the future of Human Machine Interfaces?</b></span></p>
<p class="p2"><span class="s2"><b>Elsa Andrea Kirchner:</b></span><span class="s1"> The problem with interacting with the brain through implanted chips is that you can only reach the part of the brain where the chip is located. You would need to implant many of these chips to obtain a good picture of what the brain is doing. For some purposes, this procedure is certainly useful, such as for stimulating the brain in Parkinson&rsquo;s disease.</span></p>
<p class="p2"><span class="s3"><b>What is the alternative?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> Brain activity can also be measured externally, using electrodes attached to the head. However, when doing this, you are always measuring a sum of activities, so the resolution is lower than with an implanted electrode. Additionally, there is more noise because brain currents are measured through the skin, bones and hair. You therefore need powerful devices to record them, and you need good signal processing and <a href="https://future-markets-magazine.com/en/encyclopedia/machine-learning/" target="_blank" title="Procedure by which computer systems acquire knowledge independently and can expand their knowledge, allowing them&hellip;" class="encyclopedia">machine learning</a> to interpret the data correctly.<span class="Apple-converted-space">&nbsp; </span></span></p>
<p class="p2"><span class="s3"><b>What is your approach in the EXPECT project?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> Quite often, when interacting with another person, we can tell what they want to do or expect from us. For example, a colleague hands me a tool because I am looking at it and he is standing right next to it. This is also something you want to achieve in interactions with machines. You don&rsquo;t always want to explicitly tell the machine every single step; you want the machine to understand it on its own. There are many ways to achieve this, and one of these ways is the direct use of brain activity.</span></p>
<p class="p3"><span class="s1"><b>Does every person think the same way? Does an EEG always show the same thing, regardless of who is thinking the thought &ldquo;Robot, open the gripper&rdquo;?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> Our brains are organised very similarly. So, we have the same areas in similar locations. But we also know that there are significant differences between people. This means that, once you have measured a person&rsquo;s brain activity with an EEG and analysed it using <a href="https://future-markets-magazine.com/en/encyclopedia/machine-learning/" target="_blank" title="Procedure by which computer systems acquire knowledge independently and can expand their knowledge, allowing them&hellip;" class="encyclopedia">machine learning</a>, you cannot simply transfer the trained model to another person. The performance might decrease by 20 or 30 percent. So, we have a number of challenges to overcome in training the models so that they can be used for several individuals, and this is one of the EXPECT project&rsquo;s objectives.</span></p>
<p class="p3"><span class="s4"><b>Your platform for human-robot collaboration relies on various forms of active interaction, not just thoughts. Why is that?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> Imagine a stroke patient who, for example, is unable to move their right arm. Even in such a case, there is still some planning of movement happening in the brain. We can recognise that and move the arm using an exoskeleton. However, we encounter some problems with this approach. First, our interpretation is not 100&nbsp;percent accurate. The second problem is that when a person thinks about a body movement, they may not necessarily want to execute it. </span></p>
<p class="p2"><span class="s1">Most patients still have a tiny amount of muscle activity even after a stroke, which we can utilise. First, we interpret the EEG signal and recognise that the patient is thinking about a movement. At the same time, we monitor the patient&rsquo;s muscles, and if we detect activity, we know that they truly want to perform the movement.</span></p>
<p class="p2"><span class="s1">This combination of different signals is crucial because if an exoskeleton suddenly moves the arm without the person wanting it, they may feel like they&rsquo;ve lost control and the exoskeleton has taken over their free will.</span></p>
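<p class="p2">The gating described above can be captured in a few lines. The sketch below is a hypothetical decision rule, not the project&rsquo;s actual controller: the exoskeleton assists only when the EEG suggests a planned movement <em>and</em> residual muscle (EMG) activity confirms that the person really wants to execute it. The threshold values and function names are illustrative assumptions.</p>

```python
# Hypothetical AND-gate over two signals (thresholds are made up for the sketch):
# the EEG alone says "a movement is being planned"; the EMG alone says "the
# muscles are actually trying to move". Only the combination triggers the
# exoskeleton, so merely thinking about a movement never moves the arm.

EEG_THRESHOLD = 0.7   # classifier confidence that a movement is planned
EMG_THRESHOLD = 0.1   # minimal residual muscle activation (normalised units)

def should_assist(eeg_confidence: float, emg_level: float) -> bool:
    """Gate the EEG-detected intention with EMG confirmation."""
    planned = eeg_confidence >= EEG_THRESHOLD
    confirmed = emg_level >= EMG_THRESHOLD
    return planned and confirmed
```

<p class="p2">With such a gate, thinking about a movement without wanting it (high EEG confidence, silent muscles) leaves the exoskeleton still, which addresses the loss-of-control problem described above.</p>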
<p class="p3"><span class="s1"><b>In what other cases does it make sense to use various means of interaction?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> Take speech recognition, for example. Often, the environment is too noisy for it. In our projects, we are working on combining EEG and speech to ensure accurate speech recognition. Simultaneously, speech recognition can help us better interpret the EEG. During the training phase, one might say, &ldquo;Please fetch the hammer&rdquo; while measuring the EEG. Later, you only have to think it and the robot will understand it based on brain activity.</span></p>
<p class="p3"><span class="s1"><b>The main goal of your project is for the machine to anticipate the human&rsquo;s intention. In which applications does this make sense?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> Sometimes you find yourself working with people who already know what they should do before you tell them anything. We find this to be a very positive experience. The same applies to when we work with a machine: there are situations where it would be better if the system knew my intention.</span></p>
<p class="p2">Imagine wearing an exoskeleton and trying to repair something overhead. The exoskeleton supports your arm and keeps it raised, which is initially helpful. But at some point you&rsquo;ll finish your work and want to lower your arm again. Although the sensors recognise this, for a moment you still have to work against the exoskeleton. If we can recognise the planning of arm movements in the brain, the system can prepare for it and react faster. We have already tested this with individuals, and they could really feel the differences.</p>
<p class="p3"><span class="s1"><b>How does that work exactly?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> We can look into the brain and examine the timeframe during which the brain is planning the movement before sending a signal to the muscles. This can take up to 1.5&nbsp;seconds, sometimes even longer. We can look into this preparation phase and recognise that the person wants to move. And this can only be done through brain signals, not gestures, muscle activity, eye movements, or speech.</span></p>
<p class="p3"><span class="s1"><b>Where are you currently in your research project?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> Within the EXPECT project, we are focusing on the possibilities of training on multimodal data and how to use it. For example, to switch between signals when the quality of one signal deteriorates. So, it&rsquo;s not about the general approach, but rather about how we can use different methods to adapt to changing signal qualities.</span></p>
<p class="p2"><span class="s1">We can train with different signals than those we later use. For example, if a patient&rsquo;s muscle activity is not reliable initially, we can use EEG for training and later use muscle activity and eye tracking to improve and enhance performance.</span></p>
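<p class="p2">One simple way to realise such switching, shown here purely as an assumption-laden sketch rather than the project&rsquo;s method, is to weight each modality&rsquo;s prediction by an estimate of its current signal quality: when the EEG degrades, the fused decision automatically leans on EMG or eye tracking instead.</p>

```python
# Quality-weighted fusion of per-modality confidence scores (all values 0..1).
# Modality names and numbers are illustrative, not measured data.

def fuse(predictions, quality):
    """Average the modality predictions, weighted by current signal quality."""
    total = sum(quality.values())
    if total == 0:
        raise ValueError("no usable modality")
    return sum(predictions[m] * quality[m] for m in predictions) / total

preds = {"eeg": 0.9, "emg": 0.3}

clean = fuse(preds, {"eeg": 1.0, "emg": 0.2})  # good EEG: follow the EEG
noisy = fuse(preds, {"eeg": 0.1, "emg": 1.0})  # degraded EEG: lean on the EMG
```

<p class="p2">With identical predictions, the fused score tracks the EEG while its quality is good and falls back towards the EMG as soon as the EEG becomes unreliable.</p>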
<p class="p3"><span class="s1"><b>You use gestures, speech, eye movements, and brain activity&nbsp;&ndash; is there a technology that will dominate in the future?</b></span></p>
<p class="p2"><span class="s5"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s6"> That&rsquo;s a difficult question to answer. It depends on how and what you want to communicate. But BCI is better suited for the future than other systems. However, I believe that the quality of interfaces will change in a way that makes them much more natural for us. Ideally, you wouldn&rsquo;t see, feel, or notice an interface. And I believe in multimodal interfaces because that&rsquo;s how we communicate as humans&nbsp;&ndash; with speech, facial expressions, and gestures.</span></p>
<p class="p3"><span class="s1"><b>What developments in semiconductor technology are particularly exciting for you?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.</b></span><span class="s7"><b>:</b></span> A group of researchers and engineers at the University of Duisburg Essen is working on terahertz technology. This technology is excellent for recognising the environment. You can see a wall, a window and a corner and even determine whether it&rsquo;s made of wood, stone or plastic. There are many ideas on how this technology can be used to measure biosignals without physical contact &ndash; for example, by measuring muscle movement through the reflection of terahertz waves. And from these movements we can infer what the hand and fingers are doing.</p>
<p class="p2"><span class="s1">The use of graphene to realise epidermal electronics is also interesting. It allows us to measure electrical activity in the muscle with very high resolution. This is not only fascinating for interaction but also for understanding muscle-related diseases.</span></p>
<p class="p3"><span class="s1"><b>Is there anything missing in today&rsquo;s available semiconductor solutions for your platform?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> Imagine you want to perform a complex analysis of brain activity. This requires a large amount of data and powerful <a href="https://future-markets-magazine.com/en/encyclopedia/machine-learning/" target="_blank" title="Procedure by which computer systems acquire knowledge independently and can expand their knowledge, allowing them&hellip;" class="encyclopedia">machine learning</a> models. This might be quite challenging to achieve on-site, but research is already focused on implementing these large AI models into small embedded devices. This is also crucial for us because if you&rsquo;re walking around in nature with your exoskeleton and don&rsquo;t have internet access, relying on <a href="https://future-markets-magazine.com/en/encyclopedia/cloud/" target="_blank" title="Provision of IT resources over the Internet on demand, billed according to actual usage." class="encyclopedia">cloud</a>-based AI processing can be problematic. </span></p>
<p class="p2">To integrate the AI model into the system, we need highly energy-efficient computing power. The model must also continue learning during operation. For example, imagine a patient whose muscle signal improves over time. The model should recognise this, so the system relies more on muscle activity than on the EEG.</p>
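<p class="p2">The kind of on-device adaptation described here could, for instance, track a modality&rsquo;s observed reliability with an exponential moving average and shift the fusion weight accordingly. The smoothing factor and the numbers below are illustrative assumptions, not project parameters.</p>

```python
# As a patient's EMG becomes more reliable over time, an exponential moving
# average gradually moves the fusion weight towards the muscle signal.

def update_weight(weight, observed_reliability, alpha=0.1):
    """One EMA step towards the modality's currently observed reliability."""
    return (1 - alpha) * weight + alpha * observed_reliability

w_emg = 0.2                       # initially: rely little on the weak EMG
for _ in range(50):               # reliability observed as 0.9 in each session
    w_emg = update_weight(w_emg, observed_reliability=0.9)
# w_emg has moved close to 0.9, so the system now favours muscle activity
```

<p class="p2">The EMA keeps adaptation gradual, so a single noisy session does not flip the system&rsquo;s behaviour.</p>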
<p class="p3"><span class="s1"><b>What do you consider important in designing an optimal Human Machine Interface? </b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> The most important thing is to be open-minded. That means not saying, &ldquo;I&rsquo;m a BCI researcher, so I want to work with brain activity.&rdquo; You should always think about what you want to achieve and how humans would interact.</span></p>
<p class="p2">Secondly, always keep in mind that we&rsquo;re talking about a diverse society. Facial expressions may vary among different nationalities, for example. So, when developing such a system, consider not only the technology but also the social context. For me, it&rsquo;s also crucial to talk to the people who will use the system in the future.</p>
<p class="p3"><span class="s1"><b>Machines that can read human thoughts&nbsp;&ndash; is this the first step towards the dystopia portrayed in Hollywood where machines gain power over humans?</b></span></p>
<p class="p2"><span class="s7"><b>E.&thinsp;A.&thinsp;K.:</b></span> At the moment, we are not yet at the point where we can truly read thoughts completely. And, to be honest, I don&rsquo;t believe the danger lies in the machine controlling the human. I think the risk is more likely to be another human having access to someone&rsquo;s thoughts.</p>
<p class="p2"><span class="s1">We had a project, for example, where we wanted to develop an EEG-based approach to detect high workload in individuals within a company, to prevent burnout, for instance.</span></p>
<p class="p2"><span class="s1">In this case, you need to prevent someone with malicious intent from accessing the data in the <a href="https://future-markets-magazine.com/en/encyclopedia/cloud/" target="_blank" title="Provision of IT resources over the Internet on demand, billed according to actual usage." class="encyclopedia">cloud</a>. But even if this data is only used to optimise a person&rsquo;s production environment, such as slowing down a robot, it can still have consequences for the person. Because the employer might say, &ldquo;I&rsquo;d rather hire a younger person who can work faster.&rdquo;</span></p>
<p class="p2"><span class="s1">So, understanding a person can also be used to worsen the situation or harm individuals. For example, we can discriminate against people because we discover that they cannot recognise certain things or have a very low attention span. This is already a very real possibility if someone has access to a person&rsquo;s EEG.</span></p>
<p class="p3"><span class="s1"><b>Finally, a look into the future&nbsp;&ndash; you can now be visionary: How will we interact with machines in 25&nbsp;years?</b></span></p>
<p class="p2"><span class="s2"><b>E.&thinsp;A.&thinsp;K.:</b></span><span class="s1"> I expect that interacting with machines will be very similar to interacting with other people by then. We will interact with systems and speak to them very naturally. These systems will understand what we want. Multimodal interaction will be taken for granted. I believe that in the future, it will be very difficult to discern from the outside whether we are interacting with another human or a machine. </span></p>
<p>The post <a href="https://future-markets-magazine.com/en/innovators-en/predictive-robots-our-future-assistants/">Predictive robots: our future assistants</a> appeared first on <a href="https://future-markets-magazine.com/en/">Future Markets Magazine</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
