MeMouse is an interactive platform developed using digital image processing techniques to provide the user with mouse functionality. Computer vision techniques are central to this scenario. This chapter describes the basic functionality of the system and the specifications according to which the system has been developed.
Human Computer Interaction is an important field today. Research and development in this field is growing day by day. Several problems are being solved and improvements are being made to many existing solutions. MeMouse also provides interaction with the computer. The idea of interaction was inspired by the smart board. MeMouse focuses on letting the user interact with a presentation without the aid of a mouse, using only a camera. The software focuses on how any system can be controlled through defined hand gestures. Hand gestures take over the functionality of the mouse while the user interacts with the computer from the point of presentation.
While presenting, a user needs to move between the slideshow and the hardware controls. In our environment, computers are placed on the left side of the stage and the presentation is projected on the right side. If an instructor, for example, wants to explain a particular point, he moves to the projected screen. But if there is a need to change a slide or perform any mouse operation, he has to walk back to where the computer is placed. This consumes time and affects the attention of the presenter as well as the audience. This inconvenience needs to be resolved. MeMouse addresses this problem by providing mouse functionality using computer vision techniques.
Goals and Objectives
The objective of MeMouse is to provide hardware-free mouse functionality to the presenter using computer vision techniques. The functions that must be performed are single click, double click and right click. The goal of the project is first to capture video of the presenter, then to extract the hand. Hand gestures are then observed. If any gesture matching the defined gestures appears, the computer performs the corresponding operation. All of these operations must be performed in a real-time environment.
The deliverable for this project is the software that controls the computer according to the gestures of the presenter.
This document provides basic knowledge about MeMouse. First, a description of the project is given; the next chapter explains related work in the field and discusses other software products providing the same functionality. This thesis also includes the requirements specification of the software in Chapter 3. Design specifications are discussed in Chapter 4. Chapter 5 describes the implementation details of our project, after which an analysis of the software is provided along with a few suggestions for future enhancements of the project.
MeMouse is an interactive platform that provides mouse functionality in the domain of computer vision. This chapter has provided an introduction to MeMouse and the goals and objectives that were set in order to develop this software. A brief elaboration of the document contents has also been added.
This chapter provides an insight into different technologies that have already been developed for Human Computer Interaction in different domains, including signal processing and other hardware devices. All of these devices and applications provide related functionality. An overview of these technologies is presented along with a comparison with MeMouse.
The Smart Board interactive whiteboard uses touch detection for user input – e.g., scrolling, right mouse-click – in the same way that normal PC input devices, such as a mouse or keyboard, detect input. A projector is used to display a computer's video output on the interactive whiteboard, which then acts as a large touch-screen. The components are connected wirelessly, via USB or serial cables [1]. Figures 2.1 (a) and 2.1 (b) show the Smart Board in use. A projector connected to the computer displays the computer's desktop image on the interactive whiteboard. The interactive whiteboard accepts touch input from a finger, pen or other solid object. Each contact with the Smart Board interactive whiteboard is interpreted as a left-click of the mouse. Other functions are implemented in the same way using different inputs.
Figure 2.1 (a) Figure 2.1 (b)
Use of the Smart Board in classrooms and other learning environments
MeMouse, on the other hand, only needs a camera connected to the computer, positioned so that the user is in its visual field. Hence a large touch screen does not need to be installed, which reduces the cost of the system. A normal Smart Board costs from a few hundred to several thousand dollars. Smart Boards are touch-sensitive devices, so an accidental touch can trigger many unwanted operations on screen. MeMouse has fewer such drawbacks. Hence MeMouse is preferable to Smart Boards.
Interactive Whiteboards Using the Wiimote
The Wiimote is a remote controlling device that uses an infrared (IR) camera to detect infrared light sources. An IR camera is integrated into the Wiimote to detect energy sources. Since the Wiimote can track sources of infrared light, a user can track pens that have an IR LED in the tip [2]. By pointing a Wiimote at a projection screen or LCD display, a user can create interactive whiteboards or tablet displays at a very low cost compared to Smart Boards. MeMouse provides an even cheaper solution, using a web camera which is far cheaper than an IR camera. Moreover, MeMouse does not require any aiding device for hand detection or input. Hence MeMouse is preferable to the Wiimote.
Figure 2.2 (a) Figure 2.2 (b)
Interaction using a Wiimote
A digital presenter is a small device with a USB dongle that connects to the computer. Another device of the same size connects to it via Bluetooth. Buttons on the presenter trigger a limited set of operations required to change slides during a presentation. MeMouse, on the other hand, has broader scope: not only are slides controlled, but other mouse operations are performed as well. Hence MeMouse gains an edge over digital presenters.
Figure 2.3: Digital Presenter
This chapter described the existing technologies that provide the same functionality as MeMouse. Smart Boards, digital presenters and the Wiimote are closely similar to MeMouse, and an analysis and comparison of MeMouse with these technologies has been provided. It has been observed that MeMouse is cheaper than any of these technologies while providing the same results; hence MeMouse has an edge over these devices.
This chapter describes the requirements specification for version 1.0 of this undergraduate degree project in software engineering. The idea of the product is a software program that helps the presenter interact with a presentation comfortably. It captures the hand gestures of the presenter using a camera and performs the operations of the mouse, so the presenter does not need to have hardware always at his disposal. This document provides the specification that this software needs to fulfill in order to be most useful.
MeMouse has a limited scope, namely to provide the following mouse functionalities as outputs:
Single Click
Double Click
Right Click
Slide Show Control
All of these functionalities need to be provided using hand gestures as inputs. Hands need to be tracked and analyzed. If any hand gesture matches a defined gesture, the associated operation is performed.
MeMouse needs to have some specific features that ensure the usability and durability of the product. These features are the core functionalities of the system and have informed its design.
Recognize Hand Gestures
MeMouse can interact with the Windows API as a mouse without the use of an actual mouse. The user can perform right click, left click and cursor tracking on the screen. These operations are initiated by performing defined hand gestures, namely keeping the hand static for a particular time.
MeMouse can detect the skin of any person without the aid of any marker color or other hardware. For this, the user only needs to keep his hand in the visual field of the camera.
MeMouse can recognize the hand within the visual field of the camera: it locates the hand in this area and recognizes it.
The hand is tracked in the image using the predictions of a Kalman filter. The program focuses on the area where the hand is present in order to reduce the search space.
Windows API interaction
MeMouse interacts with the Windows API in order to pass messages about mouse operations. Messages are passed to the operating system to perform single click, double click and right click.
Assumptions and Dependencies
For the development of MeMouse, some assumptions have been made. This section describes the assumptions and dependencies on which the software development was based.
It has been assumed that the hand is the object that moves the most in the environment. A drawback of this assumption is that if any other skin-colored object moves faster than the human hand, MeMouse will classify it as a human hand. It is also assumed that the hand does not stop moving for 2 seconds or more unless an operation is intended.
Windows versions later than Windows 98 form the targeted operating system family. It has been selected because it is the most widely used operating environment.
System features according to the functional requirements provided by the Project Supervisor are as follows:
The system needs to capture video so that the human hand can be detected in the user's environment and certain actions can be performed based on hand gestures. The user has to launch the program for video capture. After starting the program, the user clicks the "Start" button on the form to begin capturing. The response of the system is to display the video as output.
After video capture, the hand has to be extracted so that tracking and gesture recognition can be performed. This feature has high priority because it initializes the system and no further processing can be done without the input generated by this module. At this stage, the user interacts with the camera rather than directly with the program. Hence the response from the camera has to be accurate in order to process the video.
This is an essential part of MeMouse. It has to be efficient in order to track the hand on screen so that the cursor moves accordingly. The response of this feature is the expected position of the object based on its present position and velocity.
Gesture recognition is a core requirement of MeMouse. The performed gesture has to be classified, after analysis, as legal or illegal, and a message passed to the main class to perform the corresponding action. The response of gesture recognition is to classify the gesture and send a request to the main program for the related action.
External Interface Requirements
MeMouse has specific interface requirements that are discussed in this chapter.
MeMouse is an interactive program that constantly needs input from the user. The user needs to be in the defined area, that is, the visual field of the camera. The hand should be visible in the scene in order to provide input. Input can only be taken if the hand stays in the visual field while making a specific gesture. The gestures must be able to perform the mouse operations for the presentation; the hand must act as a mouse. Gestures have been defined for MeMouse to perform Single Click, Double Click, Right Click and Slide Show Control.
The hand also has to be detected using properties like skin color and hand motion; no other aiding material should be needed to do so.
MeMouse gets its input through a web camera, which must have a high enough resolution to capture clear input with little noise.
The product needs to interact with the operating system of the platform through its API. The targeted operating systems include Windows XP and Windows Vista.
Other Nonfunctional Requirements
Certain other requirements concern the performance and responsiveness of MeMouse. These requirements are described here.
MeMouse has to be efficient software in terms of response and operation. The domain of the product is image manipulation, which requires fast processing on the machine. Hence the program logic and data flow need to be designed for maximum efficiency. MeMouse needs to work under normal lighting conditions with a non-static background, be robust, be compatible with the platform, and respond within minimal time in order to produce output.
Software Quality Attributes
MeMouse has to follow some requirements that affect the quality of the system. The quality of MeMouse is improved by following the quality requirements described in this section.
Runtime System Qualities
At run time MeMouse has to perform some functions in order to provide the user with the required functionality. As the system has to perform its functions in real time, the runtime qualities of MeMouse are as follows:
MeMouse must perform functions like right click, single click and double click at any point of the screen.
MeMouse must be able to perform operations within an acceptable time, that is, within one second.
MeMouse must be available whenever the user needs it, for example throughout a presentation session.
MeMouse has to be user friendly. The user must be able to use MeMouse in the most convenient way.
Non-Runtime System Qualities
Non-runtime qualities of MeMouse are those required for enhancing the code, or for making MeMouse useful to other developers who extend the system to other requirements and environments.
MeMouse has to be able to accommodate changes, including modification to incorporate more gestures. The software must also accommodate any other functions that another user, such as a programmer, wants to incorporate.
MeMouse should have the ability to run under different computing environments. The targeted environment for MeMouse is a presentation or lecture setting, but other environments where the user wants to use the system for personal use should also be covered.
MeMouse components must be reusable in new applications. If a system is developed which needs the functionality of MeMouse, MeMouse should be easy enough to understand that it can be incorporated.
Independently developed components of the system have to work correctly together in MeMouse. Modules of MeMouse must collaborate with each other in the way that is most useful.
MeMouse must be testable in order to free it from errors. Different tests, including beta testing, are necessary in order to remove errors and make the software perform in accordance with the specified requirements.
MeMouse needs to be robust and able to manage disaster situations that arise during operation, while still working efficiently in real time. By a disaster situation we mean a situation in which undesired inputs are provided to the system. For example, if the user's hand goes out of the frame, the system should be able to resume tracking it when it reappears in the frame.
This chapter described the requirements of the system as specified by the Project Supervisor. It includes interface, functional and non-functional requirements along with the main features required of the system. These requirements were set after checking the feasibility of the system, and they have been treated as the key criteria for testing and standardization of the product.
This chapter provides the design specifications of MeMouse. These specifications have been developed from the requirements described in the previous chapter. This chapter provides information regarding system structure and architecture.
MeMouse is software intended to assist users while they are delivering a presentation. There are several ways to control a presentation, including manual interaction through mouse or keyboard, use of a digital presenter, or a smart board. These create a lot of inconvenience during a presentation and distract the audience, which in turn wastes time. With MeMouse, the presenter is able to interact with the presentation at the point where the output is being projected.
Assumptions and Dependencies
The basic assumption underlying development is that the presenter's hand is the object that moves the most in MeMouse's environment. Another assumption is that Windows operating systems released after Windows 98 are used in the presentation environment.
For better performance, the system on which MeMouse runs should fulfill requirements such as a Pentium 4 or above (Core 2 Duo recommended), 512 MB RAM (recommended) and a graphics card (optional).
MeMouse is efficient software that produces output in different scenarios, but some conditions have to hold in order to get usable results from the system. The following constraints apply:
The room must have sufficient light so that skin color can be recognized.
The hand should move more than any other body part initially, particularly in the first 2 seconds after the program starts.
Full-sleeve shirts are recommended for best performance. Clothing must not have a color that matches skin.
There must not be any other skin-colored object in the background.
Design decisions and strategies that affect the overall organization of the system, i.e. its higher-level structures, are described here. Some important issues, such as language, platform and project extensions, are covered in this section.
C# Platform:
C# has been used to develop MeMouse. The main reason for using C# is that the application runs in real time and requires fast execution. Another reason is that it is widely used for image processing techniques. It is better suited than MATLAB here because MATLAB is efficient for still images but does not produce efficient results when used in real time with video.
The open source library AForge.NET is used for the image processing techniques. It is the best open source library for image processing with C#. Another choice was OpenCV, which was not used because of compatibility issues when used with C#.
At present, the main focus is to control slideshows using hand gestures, but there is a plan to extend the functionality so that gestures take over the mouse functions of general computer interaction, and to build an application integrated with MeMouse which will act as an interactive whiteboard.
To perform mouse operations using MeMouse, hand tracking and gesture recognition are necessary, and certain steps are followed to perform hand gesture recognition. The user needs to be present in the visual field of the camera; his hand has to be extracted and tracked; and the gestures the hand makes are analyzed. Keeping these steps in view, MeMouse was divided into five modules: video capture, hand extraction, tracking, gesture recognition, and Windows message passing.
The flow of data through the system is shown graphically in Figure 4.1. When MeMouse starts executing, it first captures video of the presenter. All processing depends on the video captured in real time. The input has to be a hand gesture, so the hand is extracted from the video, which is first converted into frames. The next step is to track the hand, in order to move the cursor and find the next input. If the hand shows a defined gesture, a message is passed to the Windows API to perform the corresponding operation. All of these modules are explained in detail later in this section.
Hand extraction is not a single standalone task; rather it is a series of tasks: skin detection, edge detection, motion residue and blob finding. After obtaining the resulting images from skin detection, edge detection and motion residue, a logical AND is applied to these images. The resulting blob is considered to be the hand.
Figure 4.1: Control flow diagram
After identifying the hand, it is tracked. A Kalman filter is used to predict the next position of the hand based on the present position. When the next position is predicted, skin-colored objects are identified within the reduced search window (that is, the hand).
The gestures to be defined were supposed to be the most convenient to use; hence, time-dependent gestures have been defined. The system clock checks how long the hand is kept static over a particular area of the screen. A notification appears about the click option, and the operation is performed when the hand moves again after being kept static for a particular time.
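The dwell-based click described above can be sketched as a small state machine. This is a language-agnostic illustration in Python (the project itself is written in C#); the class name, radius and the 2-second dwell default are illustrative, not taken from the actual implementation.

```python
import math

class DwellClickDetector:
    """Time-based gesture sketch: a 'click' fires when the tracked hand
    position stays within `radius` pixels of its anchor point for at
    least `dwell` seconds (2 s, as in the text)."""

    def __init__(self, radius=20.0, dwell=2.0):
        self.radius = radius
        self.dwell = dwell
        self.anchor = None      # (x, y) where the hand settled
        self.anchor_t = None    # timestamp when it settled
        self.fired = False      # ensure one click per dwell period

    def update(self, x, y, t):
        """Feed one tracked position with its timestamp; returns 'click'
        the first time the dwell condition is met, else None."""
        if self.anchor is None or math.hypot(x - self.anchor[0], y - self.anchor[1]) > self.radius:
            # Hand moved away: restart the dwell timer at the new spot.
            self.anchor, self.anchor_t, self.fired = (x, y), t, False
            return None
        if not self.fired and t - self.anchor_t >= self.dwell:
            self.fired = True
            return "click"
        return None
```

Feeding the detector positions at 0 s, 1 s and 2.1 s that all lie within the radius yields `None`, `None`, then `"click"`; once fired, it stays silent until the hand moves away and settles again.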
The use case diagram of the system is given below. This diagram describes the interaction between the user and the system.
Basic Flow of System
The user places his hand in front of the camera; the camera takes the image and sends it to the clipboard, where it is saved. The system captures a new frame and sends a copy of the frame for skin detection; the skin module sends the extracted image back to the clipboard. The system sends another copy for edge detection; the module returns the extracted edges in a frame. The system sends the current and previous frames to the motion residue module to calculate motion between subsequent frames; the module returns a frame with the difference of the two frames.
Fig 4.2: Use Case Diagram
The system performs a logical AND of all three returned frames; blob counting is then done to extract the largest blob from the resulting frame, which is the user's hand. The system sends the center of the hand to the filter to predict the possible position of the hand in the next frame, and also sends the hand shape for gesture recognition. The filter takes the position and predicts the next position. The gesture recognition module processes the shape to find the gesture; if a gesture is found, the system sends a message to the Windows API to perform the action, else the next frame is captured. The next frame is searched in a restricted area obtained from the Kalman filter output.
The system has then performed the action requested by the user through a hand gesture.
There is no alternate scenario because the objective is well defined and the product has been developed by strictly following the requirements.
The class diagram of MeMouse is presented as Figure 4.3 and elaborated in this section. All the classes in the diagram are described briefly, and a legend is provided in the diagram to describe the symbols used and their purpose.
Figure 4.3: Class Diagram of MeMouse
As shown in Figure 4.3, MeMouse is the main class. This class controls all other classes and interacts with them in order to perform the required functionality. Once the user starts the program, the user's direct interaction with the program ends; it is then the responsibility of the MeMouse class to carry out further actions and procedures.
The MeMouse class shifts control to the WebcamCapture class to capture video and recognize gestures. The WebcamCapture class contains objects of KalmanProcessing, KalmanProperties and MotionDetector3. These objects are used to interact with the respective classes.
The KalmanProperties class contains the data and functions required for predicting the track of an object in the scene. It only defines data variables and makes them available to other classes, i.e. KalmanProcessing, to be used for predicting the moving object across frames.
This class is responsible for predicting the track of the object, and the object itself, in the scene. It contains methods for predicting the next position of the object.
This is the implementation of the interface IMotionDetector. The class contains methods for processing frames and extracting the hand in a frame. It also holds an object of the Vision class, which is used to call functions such as skin detection, edge detection and motion residue from the Vision class.
The Vision class has the methods necessary for finding the required attributes in any image, from skin detection to motion detection and the logical AND of bitmap images.
MeMouse has to perform operations in a real-time environment, which is why it has to be properly designed for efficiency. This chapter elaborated the design of the software in accordance with the assumptions and constraints applied during development. The class diagram, data flow diagram and use case have been added and explained to give a better understanding of the system's functionality.
This chapter provides a summary of the different approaches used to address the problem statement of MeMouse. All of these approaches are useful but differ in efficiency and response. The complete MeMouse system has been subdivided into five modules or subsystems, based on prior technologies and team effort. These modules are: Video Capture, Hand Extraction, Hand Tracking, Gesture Recognition and Windows API Interaction. The Hand Extraction part can be further subdivided into Skin Color Detection, Edge Detection and Residue Image. Gesture Recognition can be further subdivided into Time-based Gestures and Up/Down Gestures.
The first and most important step in MeMouse is to capture video for real-time processing, since the software performs operations according to hand gestures rather than the mouse itself. These hand gestures have to be captured and observed, so video of the presenter is captured and converted into a format that the other system modules can operate on. A camera captures video at different rates depending on its quality and resolution; we use a web camera because it is cheap to buy and maintain. The captured video can be converted into images. As the C# platform does not itself provide functionality to convert the video into usable data, some other tool is needed. We considered two tools that provide this functionality: AForge.NET and OpenCV. Both are available as open source software.
OpenCV is written mainly in C and provides portable digital signal processing; wrappers for C++, C# and Java are available. As C# is our development language, the main concern is compatibility with C#. The program needs to convert the video, captured in .avi format, into images, and in this scenario OpenCV does not provide some important functions that were required. Hence OpenCV is incompatible with our C# programming interface.
AForge.NET is also an open source imaging library. Unlike OpenCV, AForge.NET is a C# framework designed for developers in the fields of computer vision and artificial intelligence [3][4]. It therefore provides complete functionality for image manipulation in a C# programming environment, and it was chosen for the Video Capture module of MeMouse and used for real-time image manipulation. The image processing functions of AForge.NET are accessed through the AForge.Imaging library, a library of image processing routines and filters. It can be used to convert the video into image frames according to the frame capture rate. As MeMouse has to process images in real time using C#, AForge.NET provides the best solution for image capture and manipulation.
Before the actual processing starts, the human hand needs to be detected. MeMouse depends on the movement of hands and on hand gestures. Hand extraction techniques help in extracting the human hand from the image and subtracting the background. This approach has been developed using image segmentation techniques that include skin color detection, edge detection and motion detection; put together, these techniques give us the human hand in the video. Skin detection alone could be used to detect hands, but as our system needs real-time results, the techniques of Khurshid and Vincent and of Askar et al. [5][6], which perform hand segmentation in real time, have been used in combination.
Skin color is the most important property through which we can detect the hand. Skin occupies a specific range of colors, varying from region to region as well as with lighting conditions. Various models can be used to find skin color, including the RGB, normalized RGB, TSL, YCrCb and HSI color spaces. All of these approaches are efficient in different scenarios; the two most important are RGB and HSI. RGB space can be used in this scenario, but its main disadvantage is that it detects a range of other colors that the system does not need, and if we add corrections to avoid those errors, the efficiency of the system decreases. HSI does not show this inefficiency. It has been shown that, regardless of race, human skin color falls into a finite subset of hue values. Keeping this in view and based on experimental results, the HSI color space was chosen for skin detection. The literature shows that the range of HSI values for the human body falls into a finite subset of real values, and this information was the basis for implementing the skin color detection sub-module. The hue and saturation values of skin color are the most important factors to consider. Using the HSI model, better results have been achieved.
Table 5.1: HSI values used
Hue: 4 to 35
Intensity (luminance): 0 to 0.7
Saturation: 0 to 0.8
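The skin classification rule can be sketched as follows. This is a minimal illustration in Python (the project itself uses C#), using the standard RGB-to-HSI conversion formulas; the assumption that the ranges of Table 5.1 map to hue (degrees), intensity and saturation respectively is ours, and the threshold constants come straight from that table.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert 8-bit RGB to (hue in degrees, saturation, intensity),
    using the standard HSI formulas."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0  # achromatic pixel: hue is undefined, report 0
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
    return h, s, i

def is_skin(r, g, b):
    """Classify one pixel as skin using the ranges of Table 5.1:
    hue 4-35 degrees, intensity 0-0.7, saturation 0-0.8."""
    h, s, i = rgb_to_hsi(r, g, b)
    return 4.0 <= h <= 35.0 and s <= 0.8 and i <= 0.7
```

Running `is_skin` over every pixel yields the binary skin mask used by the later stages; a warm tone such as RGB (200, 140, 110) falls inside the ranges, while saturated blue or pure white falls outside.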
Using only skin color detection, the hand cannot be detected efficiently. The major constraint is that the face is also detected in this scenario, so other techniques need to be used in combination with skin detection. Another constraint of this approach is the presence of skin-colored objects in the background, because those objects will be detected as skin as well. The resulting image is converted into a binary image for further manipulation.
Along with skin detection, another relevant technique is edge detection. Separating the hand from other objects is made easier by finding edges. Edge detection is an important technique for finding any object in a given space, and it is part of image segmentation. The edges of an object vary depending on its shape. Edge detection is used for feature detection and feature extraction, which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Various techniques can be used for edge detection in images or video, including the Sobel operator, differential edge detection, the Canny edge detector, the Prewitt operator and the Roberts cross. Thresholding is another technique that can be applied to find edges. Based on these techniques, different methods were tried to find edges in the video; the most accurate results were obtained using the Sobel operator. The resulting image is converted into a binary image for further manipulation.
The Sobel operators [7] that have been used are shown in Figure 5.1.
Figure 5.1: Sobel operators
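The Sobel edge-detection step can be sketched as below. This is a minimal NumPy illustration assuming a grayscale float image; the naive convolution loop and the fixed binarization threshold are choices made for clarity (the thesis does not state its threshold value).

```python
import numpy as np

# The two 3x3 Sobel kernels of Figure 5.1.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)   # horizontal gradient
KY = KX.T                                   # vertical gradient

def convolve2d(img, k):
    """Naive 'valid' 2-D correlation; adequate for a 3x3 kernel sketch."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y+3, x:x+3] * k)
    return out

def sobel_edges(gray, thresh=1.0):
    """Binary edge map from the Sobel gradient magnitude."""
    gx = convolve2d(gray, KX)
    gy = convolve2d(gray, KY)
    mag = np.hypot(gx, gy)       # gradient magnitude
    return mag > thresh          # binarize for further processing
```

The binary output corresponds to the binary edge image that is later intersected with the skin and motion masks.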
The presenter uses his hands the most during a presentation, so the hands become the most mobile objects in the video. Therefore another property, hand motion, is useful for detecting the hands. For this purpose a residue image is computed, which detects motion across a sequence of frames. The difference between two frames is taken in order to detect motion: the two images are treated as matrices, and moving objects are found by analyzing the gray-level changes in the video sequence. Let F_i(x, y) be the i-th frame of the sequence; then the residue image D_i(x, y) is a binary image formed by thresholding the difference of the i-th and (i+1)-th frames. This makes it possible to extract motion from complex backgrounds.
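The residue-image computation can be sketched in a few lines. The threshold value below is an assumption for illustration; the thesis does not give the one it used.

```python
import numpy as np

def residue_image(frame_i, frame_next, thresh=25):
    """Binary residue image D_i = |F_{i+1} - F_i| > threshold.

    Frames are 8-bit grayscale arrays of the same shape; the cast to
    int avoids uint8 wraparound in the subtraction.
    """
    diff = np.abs(frame_next.astype(int) - frame_i.astype(int))
    return diff > thresh
```

Pixels that changed between consecutive frames (i.e. moving objects such as the hand) come out as foreground in the binary result.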
The results of skin detection, edge detection and the residue image together give the hand extracted from the video. As all three images are binary, the area common to the three images is the extracted hand in the video sequence; together they provide a 'combined' image [5]. We find the largest contour area and its center and then draw a bounding box of fixed width and height, which represents the hand region we were looking for. Further operations are performed on the area bounded by this box.
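The combination step can be sketched as follows. One simplification to note: the thesis selects the largest contour before placing the box, whereas this sketch centers the fixed-size box on the centroid of all foreground pixels, which avoids needing a contour-finding routine; the box dimensions are illustrative assumptions.

```python
import numpy as np

def combine_masks(skin, edges, motion):
    """Pixel-wise AND of the three binary maps (all the same shape)."""
    return skin & edges & motion

def fixed_bounding_box(mask, box_w=40, box_h=60):
    """Center a fixed-size box on the centroid of the combined mask.

    Returns (x0, y0, x1, y1), clipped to the image, or None if the
    mask is empty.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    cy, cx = int(ys.mean()), int(xs.mean())
    h, w = mask.shape
    x0 = max(0, cx - box_w // 2); y0 = max(0, cy - box_h // 2)
    x1 = min(w, x0 + box_w);      y1 = min(h, y0 + box_h)
    return x0, y0, x1, y1
```

All subsequent processing (tracking, gesture timing) is then restricted to the returned box.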
Tracking the hand is an important part of MeMouse: the tracking algorithm gives the cursor its position on screen. An exhaustive search of the complete frame for the hand would make the software inefficient, so a technique is needed that makes tracking efficient. The Kalman filter [8] is used to predict the next position of the moving object using the basic equation of motion, which reduces the search space in each frame. The Kalman filter is based on the linear state equation:
x_k = A x_{k-1} + B u_{k-1} + w_{k-1}
The Kalman filter estimates a process by using a form of feedback control: the filter estimates the process state at some time and then obtains feedback in the form of (noisy) measurements. As such, the equations for the Kalman filter fall into two groups: time-update equations and measurement-update equations. The time-update equations are responsible for projecting forward (in time) the current state and error-covariance estimates to obtain the a priori estimates for the next time step. The measurement-update equations are responsible for the feedback, that is, for incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate.
Hence the time-update equations are predictor equations, while the measurement-update equations are corrector equations; the final estimation algorithm resembles a predictor-corrector algorithm for solving numerical problems. The time-update equations of the Kalman filter are:
x1_k = A x2_{k-1} + B u_{k-1}
P1_k = A P2_{k-1} A^T + Q
The measurement-update equations of the discrete Kalman filter are:
K_k = P1_k H^T (H P1_k H^T + R)^-1
x2_k = x1_k + K_k (z_k - H x1_k)
P2_k = (I - K_k H) P1_k
After each time- and measurement-update pair, the process is repeated with the previous a posteriori estimates used to project, or predict, the new a priori estimates. The Kalman filter therefore recursively conditions the current estimate on all of the past measurements.
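The predictor-corrector cycle above can be sketched for the 2-D hand position as follows. This is an illustrative NumPy implementation, not the thesis code: the constant-velocity state vector [x, y, vx, vy], the zero control input u, and the values of Q and R are all assumptions, since the thesis does not specify them.

```python
import numpy as np

class KalmanTracker:
    """Constant-velocity Kalman filter for the 2-D hand center."""

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])        # state [x, y, vx, vy]
        self.P = np.eye(4)                           # error covariance
        self.A = np.array([[1, 0, dt, 0],            # state transition
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],             # we measure position only
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)                       # process noise
        self.R = r * np.eye(2)                       # measurement noise

    def predict(self):
        # Time update: x1_k = A x2_{k-1};  P1_k = A P2_{k-1} A^T + Q
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x[:2]            # predicted hand position (search center)

    def update(self, zx, zy):
        # Measurement update: K_k, then x2_k and P2_k as in the text.
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In use, `predict()` gives the point around which the next frame is searched (this is what shrinks the search space), and `update()` corrects the state once the hand is actually found.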
The user shows a hand gesture that triggers a mouse operation on the computer. Several types of gesture could be used; time-dependent gestures are the simplest to implement as well as the most convenient for the user, so a timer has been integrated to recognize the user's input. The time constraint for each operation was chosen according to how often that operation is used. In MeMouse, the user keeps his hand still in order to initiate a mouse operation. As single click is the most frequently used operation, only a 2-second hold is required to trigger it. Double click is the next most frequent, for which a 4-second timer has been set, and right click requires 6 seconds. The user is informed when his hand has been still for a particular time, and the available options are shown on screen. When the user moves his hand after keeping it still, the corresponding operation is performed.
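The dwell-time logic can be sketched as below. The hold thresholds follow the text (2 s single click, 4 s double click, 6 s right click); the jitter radius that still counts as "still" is an assumption, since the thesis does not quantify it.

```python
STILL_RADIUS = 10.0   # px of jitter still counted as stationary (assumed)

THRESHOLDS = [        # (minimum seconds held, operation), longest first
    (6.0, "right_click"),
    (4.0, "double_click"),
    (2.0, "single_click"),
]

def classify_hold(seconds_still):
    """Map a still-hand duration to the mouse operation it triggers."""
    for min_s, op in THRESHOLDS:
        if seconds_still >= min_s:
            return op
    return None          # held too briefly: no operation

class DwellDetector:
    """Tracks how long the hand has been (nearly) stationary."""

    def __init__(self):
        self.anchor = None      # (x, y, t) where the current hold started

    def feed(self, x, y, t):
        """Feed one tracked hand position; returns the operation to fire
        when the hand finally moves away, else None."""
        if self.anchor is None:
            self.anchor = (x, y, t)
            return None
        ax, ay, at = self.anchor
        if ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5 <= STILL_RADIUS:
            return None                      # still holding; timer keeps running
        op = classify_hold(t - at)           # hand moved: fire if held long enough
        self.anchor = (x, y, t)              # start a new hold at the new spot
        return op
```

The real system would also drive the on-screen notifications from `classify_hold` on every frame, so the user sees which operation a release would trigger.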
Windows API Interaction
Finally, there is a need to interact with the Windows API to trigger operations on the computer. This is done using the functions defined in the Windows API [9]. The functions needed most for mouse emulation are those for moving the mouse cursor and for issuing single, double and right clicks. A class for Windows interaction has been added to perform this core functionality of the system.
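A minimal sketch of that interaction layer is shown below, using `ctypes` to reach the Win32 `SetCursorPos` and `mouse_event` functions. The flag constants are the standard `winuser.h` values; the split into a pure `click_sequence` helper and a Windows-only `perform` function is a design choice of this sketch, not necessarily how the thesis structured its class.

```python
import sys

# Win32 mouse_event flags (winuser.h)
MOUSEEVENTF_LEFTDOWN  = 0x0002
MOUSEEVENTF_LEFTUP    = 0x0004
MOUSEEVENTF_RIGHTDOWN = 0x0008
MOUSEEVENTF_RIGHTUP   = 0x0010

def click_sequence(op):
    """Flag sequence to send for each MeMouse operation."""
    left = [MOUSEEVENTF_LEFTDOWN, MOUSEEVENTF_LEFTUP]
    right = [MOUSEEVENTF_RIGHTDOWN, MOUSEEVENTF_RIGHTUP]
    return {"single_click": left,
            "double_click": left + left,
            "right_click": right}[op]

def perform(op, x, y):
    """Move the cursor to (x, y) and issue the click (Windows only)."""
    if sys.platform != "win32":
        raise OSError("mouse_event is only available on Windows")
    import ctypes
    user32 = ctypes.windll.user32
    user32.SetCursorPos(int(x), int(y))
    for flag in click_sequence(op):
        user32.mouse_event(flag, 0, 0, 0, 0)
```

For example, `perform("single_click", 640, 360)` would click the center of a 1280x720 screen; the coordinates would come from the hand tracker, and the operation name from the dwell timer.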
The implementation details of MeMouse have been discussed in this chapter. Techniques such as skin-color detection, edge detection, motion detection, hand extraction and the Kalman filter have been covered. The process starts with video capture and extracts the hand using motion, skin-color and edge detection. Tracking is then performed in order to move the cursor; depending on hold time, gestures are recognized and the corresponding operations are performed.
Results and Analysis
MeMouse has been developed to work in a real-time environment. It is a way to control a presentation in real time using computer vision techniques, rather than touch-sensitive hardware or devices that operate over radio or Bluetooth signals.
MeMouse has been developed to assist users while delivering a presentation. The idea has been to let the user feel comfortable by not making him use any auxiliary device or material. The milestone of controlling a slideshow during a presentation has been achieved. The series of snapshots presented in this chapter gives a better understanding of the results achieved.
Hand detection is an important milestone, for which image processing techniques have been used. These techniques help detect the hand so that further operations can be performed. Figure 6.1 shows hand detection during the operation of MeMouse: a rectangular window covers the area where the hand is detected. This has been achieved by combining skin detection, edge detection and blob finding into a single image whose common region is the hand. As Figure 6.1 shows, the hand has been detected, which is the major milestone in capturing input in this scenario.
Figure 6.1: Hand Detection and tracking in MeMouse
Tracking of the hand is another important problem that has been solved efficiently; using the Kalman filter, tracking has been made better. On-screen tracking is an important step as it marks the point where the user wants to perform an operation. The output of hand tracking is shown on screen simultaneously: as the hand moves, the cursor moves on the computer screen.
Figure 6.2: Indication for Single Click
Static gestures have been added to the system to trigger operations. If the user keeps his hand still for 2 seconds, he is notified that moving his hand now will perform a single-click operation; Figure 6.2 shows this indication, with the hand held still for 2 seconds. If the user instead wants to perform a double click, he keeps his hand still for 2 more seconds, i.e. 4 seconds in total. Figure 6.3 shows the notification that moving the hand now will perform a double-click operation.
Figure 6.3: Indication for Double Click
If the user needs to perform a right click, 6 seconds is the minimum time he has to keep his hand still, after which an indication appears on screen that moving the hand will perform a right-click operation. Figure 6.4 depicts the operation performed when the user keeps his hand still for 6 seconds; as in the previous two cases, a notification precedes the operation. A user typically performs a single click after a right click: Figure 6.5 shows the user performing a 'Refresh' operation by right-clicking on the desktop and then single-clicking on the Refresh item.
Figure 6.4: Right Click operation performed
Figure 6.5: User performed Refresh operation
Other technologies used to control slideshows include smart boards, the Wiimote and digital presenters. All of these devices need more hardware support than MeMouse.
A digital presenter is a device connected via Bluetooth to its receiver in order to provide limited mouse functionality. The user needs to attach a Bluetooth receiver to the computer in order to perform operations. The hardware required is expensive, and replacing it after a failure costs more. A digital presenter controls the slides during a presentation but cannot perform any other operation, such as opening another program; the user is restricted to a particular application. MeMouse, by contrast, provides full mouse functionality, which can also be used to open and close other programs during a presentation.
A smart board is a useful device with functionality similar to MeMouse, but it requires a large touch screen in order to perform mouse operations. Smart-board equipment is quite expensive and difficult for many institutes and organizations to purchase. MeMouse provides a cheaper solution to the same problem and offers mouse operations in real time, which makes it a better option than a smart board.
The Wiimote uses an IR camera to perform its functions, and it requires the user to hold a pen-like device with a light-emitting diode (LED) at its tip: when the LED emits light, the Wiimote tracks it and performs operations. MeMouse, on the other hand, does not require any auxiliary material and provides the same mouse functionality. The Wiimote is also more expensive than the webcam MeMouse uses for input, which makes MeMouse the smarter choice.
MeMouse can perform operations such as single click, double click and right click, which in turn control the slideshow. Some of the devices above provide even fewer functions than MeMouse while being more expensive. MeMouse thus provides the same functionality as similar technologies in a much cheaper and more efficient way.
Conclusion and Future Work
This chapter describes the overall achievements of MeMouse, together with some suggestions for enhancing and upgrading the system. MeMouse can be extended cleverly to cover a broader domain of Human Computer Interaction.
The concept of MeMouse grew out of smart boards and digital presenters. These devices are expensive and require further expense for their maintenance; they are also hardware-dependent, whereas MeMouse gives the user freedom from dedicated hardware. MeMouse provides a more convenient and useful environment for performing the same functions required during presentations.
MeMouse is only the beginning of a wide field of Human Computer Interaction during presentations; it can reach many milestones that make a presenter feel more comfortable with the environment while presenting.
MeMouse can be extended to provide an on-screen keyboard to the user. This can be done by defining coordinates for a keyboard within the video-capture area, which would let the user even process documents on screen.
Another suggestion is to extend the software to detect the motion of the human hand and capture it to write on the projected screen, similar to writing on a whiteboard. Techniques to track the input object, along with algorithms that enhance the software's ability to learn, can be used; artificial intelligence techniques may be the most useful here.
MeMouse could also take over computer control in every way the user needs. Digital image processing techniques can be combined with artificial intelligence to build a complete vision-controlled computer system.
All of these extensions would have to work with the Windows API to perform the desired operations.
Human Computer Interaction can be made ever more convenient. Computer vision and digital image processing, working in collaboration with artificial intelligence, offer a wide range of options for making Human Computer Interaction more and more comfortable.