Brain-computer interface
From Wikipedia, the free encyclopedia
A brain-computer interface (BCI), sometimes called a direct neural interface or a brain-machine interface, is a direct communication pathway between a human or animal brain (or brain cell culture) and an external device. In one-way BCIs, computers either accept commands from the brain or send signals to it (for example, to restore vision) but not both.[1] Two-way BCIs would allow brains and external devices to exchange information in both directions but have yet to be successfully implanted in animals or humans.
In this definition, the word brain means the brain or nervous system of an organic life form rather than the mind. Computer means any processing or computational device, from simple circuits to silicon chips (including hypothetical future technologies such as quantum computing).
Research on BCIs began in the 1970s, but it wasn't until the mid-1990s that the first working experimental implants in humans appeared. Following years of animal experimentation, early working implants in humans now exist, designed to restore damaged hearing, sight and movement. The common thread throughout the research is the remarkable cortical plasticity of the brain, which often adapts to BCIs, treating prostheses controlled by implants as natural limbs. With recent advances in technology and knowledge, pioneering researchers could now conceivably attempt to produce BCIs that augment human functions rather than simply restoring them, previously only the realm of science fiction.
 
We're still in the extremely early days of neural computing interfaces, but make no mistake about it: when it comes to directly connecting our brains to our hardware, we're ready and rearin' whenever the gear is. And lucky us, apparently at least one such system will be shown this week at CeBIT, developed by none other than Fraunhofer: the aptly and succinctly dubbed Brain Computer Interface (we'd prefer something a little snappier, say, like the Computer Brain Interface, but whatevs). The system reads brainwaves from 128 scalp electrodes -- very slowly, mind you -- and over the last couple of years the team has already honed the device to control a pointer and enable trained users to actually write a sentence with their mind alone (even though it may take between five and ten minutes to do so).


[Via Popgadget]
Project Overview
The long-term objective of this research is to create a multi-position, brain-controlled switch that is activated by signals measured directly from an individual's brain. We believe that such a switch will allow an individual with a severe disability to have effective control of devices such as assistive appliances, computers, and neural prostheses in natural environments. This type of direct-brain interface would increase an individual's independence, leading to a dramatically improved quality of life and reduced social costs.
 
The greatest failing of technical aids for persons with severe physical disabilities is most often the inadequacy of the human-machine interface. With a universal, effective and efficient interface, current technology could provide substantial independence and hence a greatly improved quality of life for even the most severely disabled persons. In pursuit of such an ideal interface, researchers have been studying the feasibility of using electrical brain potentials to communicate directly with devices such as a personal computer system.
 
Dr. Gary Birch, an Adjunct Professor in the Dept. of Electrical and Computer Engineering at UBC and the Executive Director of the Neil Squire Foundation, has spent the last ten years working with other researchers to develop such a direct brain-to-machine interface.
 
"It was clear to me that the weakest link in utilizing technology to help people with disabilities is the human machine interface. It is the ability of someone with a disability to be able to control the technology that is the limiting factor, not the technology itself. The ideal interface would be to tap directly into the brain signals."
 
The technology that we have developed to date is based on methods for detecting user-generated patterns in the EEG related to imagined movements. This research is being pursued in three streams:
1)      development of new brain-computer interface technology;
2)      evaluation of BCI technology across different user populations and under varying conditions; and
3)      development of consumer-ready electrode arrays and a DSP hardware platform.
 
Financial Support
This project has been made possible by support from the Natural Sciences and Engineering Research Council of Canada, Grant 90278-96, the Rick Hansen Neurotrauma Initiative, Grant 99031, and by the Government of British Columbia's Information, Science and Technology Agency.
 
Progress
Prior to Sept. 1999, the BCI research team had developed a single-position, brain-controlled switch that responds to specific patterns detected in spatiotemporal electroencephalograms (EEG) measured from the human scalp. We refer to our initial design as the Low-Frequency Asynchronous Switch Design (LF-ASD) [2]. Our initial evaluations of the LF-ASD demonstrated that it was capable of detecting actual motor potentials in able-bodied subjects. This provided the necessary groundwork for advancing to the next stage of the research: testing the system's ability to detect imagined motor potentials in able-bodied individuals (our control population) and individuals with spinal-cord injuries.
 
Recently our work has focused on verifying LF-ASD function when it is driven by EEG patterns related to imagined movements. This work has taken place at our new experimental recording site at the GF Strong Rehabilitation Centre. We are the first research laboratory to attempt to recognize self-paced, imagined movements for a BCI, and as such no protocol existed for these types of evaluations. We have invested several months developing and testing a suitable experimental methodology (and related equipment) for evaluating the LF-ASD driven by self-paced, imagined movements. The methodology and equipment were refined in studies involving six pilot subjects. The methodology is summarized in [8].
 
A first on-line study with imagined movements demonstrated that able-bodied subjects using imagined movements could attain control accuracies equal to or better than those of able-bodied subjects using real movements [8]. Two able-bodied subjects (participating in two sessions each) used imagined finger movements to activate the LF-ASD and trigger events in our experimental video game. These two subjects demonstrated activation accuracies in the range of 70-82% with false activations below 2%. These accuracy rates were encouraging and were comparable to accuracies using actual finger movements, which were observed in the range of 36-83%. In terms of overall correct decisions, given that the system was making a classification every 1/8 of a second over a period of an hour, the average classification accuracy was over 99%. We are currently verifying this performance in a large population of subjects.
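To see how these numbers combine, note that at eight decisions per second nearly every decision falls in an idle period, so overall accuracy is dominated by correct rejections. The following C++ sketch (C++ matching our development tools) makes the arithmetic explicit; the number of intended activations and the per-decision false-activation probability below are illustrative assumptions, not data from the study.

#include <cstdio>

int main() {
    const double decisions_per_sec = 8.0;    // one classification every 1/8 s (from the text)
    const double session_sec = 3600.0;       // one-hour session (from the text)
    const double total = decisions_per_sec * session_sec;   // 28,800 decisions

    // Illustrative assumptions only, not the study's data:
    const double intended = 100.0;           // assumed number of intended activations
    const double hit_rate = 0.75;            // within the reported 70-82% range
    const double false_rate = 0.005;         // assumed per-decision false-activation probability

    const double hits = intended * hit_rate;
    const double correct_rejections = (total - intended) * (1.0 - false_rate);
    const double overall = 100.0 * (hits + correct_rejections) / total;

    std::printf("overall decision-level accuracy: %.2f%%\n", overall);   // ~99.4%
    return 0;
}

Because idle decisions vastly outnumber intended activations, even modest hit rates yield a very high decision-level accuracy provided the false-activation probability per decision stays well below 1%.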
 
We have completed a second study, which demonstrated that subjects with high tetraplegia could activate the LF-ASD at levels similar to able-bodied subjects [9]. Two subjects, one C4-C5 and one C5-C6, demonstrated activation accuracies in the range of 44-55% while maintaining false activations below 1%. During this study, we also collected pilot data from the subjects using imagined foot movements to activate the LF-ASD. Using imagined foot movements, the subjects demonstrated the same level of accuracy as with imagined finger movements. If our subjects continue to demonstrate this level of control, we will be able to confidently use the LF-ASD (with our current methods) to capture and study single-trial imagined, voluntary movement-related potentials (IVMRPs) in SCI subjects. This will be a critical tool for improving our understanding of the characteristics of single-trial IVMRPs. With a better understanding of IVMRPs, we should be able to improve the activation accuracy of the LF-ASD by improving its design. Note that the control accuracies reported above are all based on a single configuration of the LF-ASD (i.e., a single set of switch parameters). A preliminary, off-line study with one subject indicated that after customizing a subset of the LF-ASD parameters, the activation accuracy increased by 10% and false activations decreased by 67%. In the future we expect to see significant improvement in classification accuracy with customization of the full set of LF-ASD parameters, subject training, and improvements to the LF-ASD design. We have already begun exploring methods for automatic, on-line customization.
 
A great deal of work remains, and several significant problems must be overcome, before an interface of this nature reaches the stage where it can be used practically. Although several years of work still lie ahead, we believe that this concept of mapping imagined motor potentials from persons with severe disabilities to the control of technical aids represents a realistic approach towards a direct brain interface system that utilizes activity related to self-initiated cognitive processes.
 
Related Publications
Refereed Journals
[1]         S.G. Mason and G.E. Birch. A General Framework for Describing Brain-Computer Interface Design and Evaluation, revised and resubmitted to IEEE Trans. Rehab. Engineering, 2000.
[2]         S.G. Mason and G.E. Birch. A Brain-Controlled Switch for Asynchronous Control Applications, IEEE Trans. Biomedical Engineering, 47(10), 1297-1307, 2000.
[3]         G.E. Birch and S.G. Mason. Brain-Computer Interface Research at the Neil Squire Foundation, IEEE Trans. Rehab. Eng., 8(2), 193-95, 2000.
[4]         S.G. Mason, G.E. Birch and M.R. Ito. Improved Single-Trial Signal Extraction of Low SNR Events, IEEE Trans. Signal Processing, 42(2), 423-426, 1994.
[5]         G.E. Birch, P.D. Lawrence and R.D. Hare. Single Trial Processing of Event Related Potentials Using Outlier Information, IEEE Trans. Biomedical Engineering, 40(1), 59-73, 1993.
[6]         G.E. Birch, P.D. Lawrence and R.D. Hare. Single-Trial Processing of Event Related Potentials, The Journal of Psychophysiology, 1988.
Refereed Conferences
[7]         S.G. Mason, Z. Bozorgzadeh and G.E. Birch. The LF-ASD Brain-Computer Interface: On-line identification of imagined finger flexions in subjects with spinal cord injuries.  Proceedings of the ASSETS 2000 (ACM), Washington, USA, Nov. 2000.
[8]         Z. Bozorgzadeh, S.G. Mason and G.E. Birch. The LF-ASD BCI: On-line Identification of Imagined Finger Movements in Spontaneous EEG with Able-Bodied Subjects.  Proceedings of the ICASSP 2000 (IEEE), Istanbul, Turkey, July 2000.
[9]         D. Lisogurski and G.E. Birch, Identification of Finger Flexions from Continuous EEG as a Brain Computer Interface, Proceedings of the IEEE Engineering in Medicine and Biology Society 20th Annual International Conference, Hong Kong, 1998.
Other Conferences
[10]     S.G. Mason and G.E. Birch. Processing of single-trial, movement-related ERPs for human-machine interface applications, Proceedings of the RESNA '95 Annual Conference, 673-675, June 1995.
 
 
Equipment and Facilities
 
Our research is conducted primarily at G.F. Strong Rehabilitation Centre and the Department of Electrical and Computer Engineering at U.B.C.  
 
Our lab at G.F. Strong is used for on-line evaluation of the LF-ASD and for data collection and analysis for off-line studies. The lab is equipped with multiple workstations using a variety of tools, including Matlab/Simulink/Real-Time Workshop, MS Visual C++ and MS Office, on Unix and Windows platforms.
Technology Development
One of the most significant obstacles to using brain signals for control is establishing a signal-processing method that can extract event-related information from real-time EEG. Our lab specializes in advanced, real-time statistical signal processing techniques, including robust time-series methods, pattern recognition methods, and various custom and standard transformations (including Wavelet Transforms and Time-Frequency Transforms) for data analysis. Our data analysis and processing environment is predominantly Matlab and C/C++.
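As a concrete illustration of the kind of time-frequency feature such methods operate on, the C++ sketch below computes the low-frequency band power of a single EEG window with a direct DFT. This is an illustration only, not the LF-ASD algorithm itself; the 128 Hz sampling rate matches our setup, while the window length and band edges are assumptions.

#include <cmath>
#include <cstdio>
#include <vector>

const double PI = 3.14159265358979323846;

// Power in the [f_lo, f_hi] Hz band of one window x, via a direct DFT.
double bandPower(const std::vector<double>& x, double fs,
                 double f_lo, double f_hi) {
    const size_t N = x.size();
    double power = 0.0;
    for (size_t k = 1; k < N / 2; ++k) {
        const double f = k * fs / N;          // frequency of DFT bin k
        if (f < f_lo || f > f_hi) continue;   // keep only the band of interest
        double re = 0.0, im = 0.0;
        for (size_t n = 0; n < N; ++n) {
            const double phase = 2.0 * PI * k * n / N;
            re += x[n] * std::cos(phase);
            im -= x[n] * std::sin(phase);
        }
        power += (re * re + im * im) / double(N * N);
    }
    return power;
}

int main() {
    const double fs = 128.0;                  // sampling rate, from the text
    std::vector<double> w(256);               // 2 s window (an assumption)
    for (size_t n = 0; n < w.size(); ++n)     // synthetic 2 Hz test component
        w[n] = std::sin(2.0 * PI * 2.0 * n / fs);
    std::printf("1-4 Hz band power: %.3f\n", bandPower(w, fs, 1.0, 4.0));
    return 0;
}

In practice such features would be computed with an FFT over sliding windows and fed to a pattern classifier; the direct DFT is used here only to keep the sketch self-contained.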
 
Although we have made advances, there are many outstanding technical issues that remain to be studied before this technology can be used in natural environments.  These include exploring methods to improve the activation accuracy of the switch, developing adaptive methods to automatically customize the switch to a specific individual, exploring ways to extend the switch functionality, exploring the nature of motor potentials in persons with disabilities, and developing methods to deal with eye and movement artifact contamination of the EEG signal.
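On the last of these issues, one standard approach (a sketch of a textbook technique, not necessarily the method we will adopt) is to estimate a least-squares regression coefficient from a recorded EOG channel and subtract the scaled EOG from each EEG channel:

#include <cstdio>
#include <vector>

// Least-squares coefficient b minimizing ||eeg - b*eog||^2.
double regressionCoef(const std::vector<double>& eeg,
                      const std::vector<double>& eog) {
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < eeg.size(); ++i) {
        num += eeg[i] * eog[i];
        den += eog[i] * eog[i];
    }
    return den > 0.0 ? num / den : 0.0;
}

int main() {
    // Toy data: an EEG trace dominated by eye artifact from the EOG channel.
    std::vector<double> eog = {1.0, 2.0, -1.0, 0.5};
    std::vector<double> eeg = {0.9, 2.1, -0.8, 0.4};
    const double b = regressionCoef(eeg, eog);
    std::printf("b = %.3f, cleaned:", b);
    for (size_t i = 0; i < eeg.size(); ++i)
        std::printf(" %.3f", eeg[i] - b * eog[i]);   // residual after removal
    std::printf("\n");
    return 0;
}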



On-line Technology Evaluation
Most of the technology development is evaluated off-line on historical data from our signal database.  The true test of a technology, however, is how it behaves in real-time with a user connected.    A significant portion of our research (Stream II) is dedicated to on-line usability studies of the various BCI technologies.  These studies are used to determine switch performance and reliability and to determine how well people can adapt to a particular interface technology.  They are also used to identify how the performance varies with operating factors such as attention, stress, frustration and fatigue.
 
A Typical Usability Study: Subjects are positioned with their eyes 100 cm from the visual display. The evaluation task is to control a custom video game with their brain activity. A screen shot of one of our games is shown to the left.
 
Each subject wears an ElectroCap™ electrode cap connected to a custom SA Instruments BIOAMP signal amplifier.  Bipolar EEG signals are recorded from electrodes located over the SMA and primary motor cortex.  The subjects use a Sip & Puff switch to report errors during the sessions.
 
All EEG signals (plus EOG, EMG and a user-activated Sip & Puff switch) are sampled at 128 Hz by a PC equipped with a 12-bit analog-to-digital converter. These signals are fed into our experimental control system via the hardware arrangement shown below. The signals are interpreted by the LF-ASD algorithms inside the computer to determine when the subject is attempting to activate the switch. The experimental control system is configured for the particular task being used in the evaluation. All our control programs are generated by Real-Time Workshop from Simulink models and C/C++ using MS Visual C++ 6.0. Analysis of data is mostly done within our Matlab environment.
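Conceptually, this on-line arrangement reduces to a simple acquisition-and-decision loop. The C++ sketch below stubs out both the A/D driver and the LF-ASD classifier (both function bodies are placeholders, not real implementations); only the 128 Hz sampling rate and the 8 Hz decision rate come from the description above.

#include <array>
#include <cstdio>

constexpr int FS = 128;      // samples per second, from the text
constexpr int BLOCK = 16;    // 16 samples per decision -> one decision every 1/8 s

// Placeholder for the A/D driver; a real implementation would read the
// 12-bit converter and scale to volts.
double readSample() { return 0.0; }

// Placeholder for the LF-ASD classifier; returns true on a detected
// switch activation. The real algorithm is described in [2].
bool lfasdDetect(const std::array<double, BLOCK>& block) {
    (void)block;
    return false;
}

int main() {
    std::array<double, BLOCK> block{};
    for (int t = 0, i = 0; t < FS * 10; ++t) {     // 10 simulated seconds
        block[i++] = readSample();                 // acquire one sample
        if (i == BLOCK) {                          // decision boundary reached
            i = 0;
            if (lfasdDetect(block))
                std::puts("activation -> trigger game event");
        }
    }
    return 0;
}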

 
Prior to the evaluation sessions, subjects are given an initial orientation session, followed by one or more training sessions.  After training, the subjects participate in two or more evaluation sessions where they are asked to complete the experimental task(s).  Performance statistics are recorded and analysed for each session and across sessions and subjects.
 
No matter how many spy cams and unmanned surveillance drones government and law enforcement officials can pack into public spaces, their utility has traditionally been limited by the finite amount of footage human monitors can review in a given time frame. New DARPA-sponsored research out of Columbia University, however, may soon allow folks tasked with keeping an eye on video feeds to perform their jobs up to ten times faster -- by leveraging the rapid image processing abilities of cortical vision. Since people are able to recognize suspicious activity much more quickly than they can consciously identify what's wrong, professor Paul Sajda and his team developed a computer-brain interface device -- similar to ones we've seen control an on-screen cursor and bionic limb -- that monitors an operator's neural output while he/she is watching streaming footage, and tags specific images for later perusal. Once the technology is perfected in the coming months (it still emits too many false positives, apparently), it could allow for more thinly-staffed monitoring departments, though we suspect it will probably just convince officials to deploy more and more cameras.
Model train controlled via brain-machine interface
Hitachi has successfully tested a brain-machine interface that allows users to turn power switches on and off with their mind. Relying on optical topography, a neuroimaging technique that uses near-infrared light to map blood concentration in the brain, the system can recognize the changes in brain blood flow associated with mental activity and translate those changes into voltage signals for controlling external devices. In the experiments, test subjects were able to activate the power switch of a model train by performing mental arithmetic and reciting items from memory.
The prototype brain-machine interface allows only simple control of switches, but with a better understanding of the subtle variations in blood concentrations associated with various brain activities, the signals can be refined and used to control more complex mechanical operations.
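A rough sketch of the kind of logic this implies, in C++, with a synthetic signal standing in for the optical-topography measurement; the adaptive baseline, adaptation rate and activation margin are all assumptions, not details of Hitachi's design:

#include <cstdio>

// Synthetic stand-in for the near-infrared blood-flow measurement:
// flat baseline, then a sustained rise during "mental arithmetic".
double readBloodFlow(int t) { return (t >= 400 && t < 700) ? 1.0 : 0.0; }

int main() {
    double baseline = 0.0;
    const double alpha = 0.001;     // baseline adaptation rate (assumed)
    const double margin = 0.5;      // activation margin (assumed)
    bool on = false;
    for (int t = 0; t < 1000; ++t) {
        const double x = readBloodFlow(t);
        baseline += alpha * (x - baseline);        // slowly adapting baseline
        const bool active = (x - baseline) > margin;  // sustained rise => ON
        if (active != on) {
            on = active;
            std::printf("t=%d: switch %s\n", t, on ? "ON" : "OFF");
        }
    }
    return 0;
}

Run as written, the switch turns ON when the simulated blood flow rises and OFF when it returns to rest, mirroring the on/off power-switch control described above.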
In the long term, brain-machine interface technology may help paralyzed patients become independent by empowering them to carry out actions with their minds. In the short term, Hitachi sees potential applications for this brain-machine interface in the field of cognitive rehabilitation, where it can be used as an entertaining tool for demonstrating a patient’s progress.







