EE586L/Projects 2012
AudioSense
Authors: Varun Nasery, Rohit Khaladkar, Gaurav Savadi
Abstract: A gesture-recognition-based audio system is implemented. Hand gestures are used to increase or decrease the volume, change tracks, mute or unmute the track, play, pause, and seek within the track.
Video: [TBD]
Digit(al) Calculator
Authors: Anil Sunil, Chetan Bhadrashette, Sarthak Sahu
Abstract: Automatic recognition of sign language is an important research problem in communication. Real-time image processing can provide a much better experience than a touch-based system. Our project implements a basic calculator using gesture recognition: numbers and operations such as addition, subtraction, multiplication, and division are entered with hand gestures, without pressing any buttons or typing anything.
Video: [TBD]
Duck Hunters
Authors: Madhur Ahuja, Pushkar Waghulde, Sahil Shrivastava
Abstract: The name says it all. We all miss the Nintendo games from the 90s, so in an attempt to refresh your memories we re-created the game. This version does not need a Zapper gun: you use a rod to project a point onto the screen, and we didn't change the rules either.
Video: [TBD]
EDGR
Authors: Aditya Tannu, Michael Minkler, Joshua Ramos
Abstract: EDGR (Embedded Depth Gesture Recognition): an 8-piece puzzle solved using hand gestures.
Video: [TBD]
eMuffler
Authors: Kiran Nandanan, Rajesh Bisoi
Abstract: The objective of the project is to remove noise from a noisy speech signal using an adaptive noise cancellation technique. We use the normalized Least Mean Square (NLMS) algorithm to cancel the noise, which is fed through a reference mic, from the noisy speech signal fed through the primary mic.
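For readers unfamiliar with NLMS, the following is a minimal C sketch of a normalized-LMS noise canceller of the kind the eMuffler abstract describes; the filter length, step size, and all names are illustrative assumptions, not the authors' implementation.

```c
/* Minimal NLMS adaptive noise canceller (illustrative sketch).
 * d[] = primary-mic signal (speech + noise), x[] = reference-mic noise,
 * e[] = output error signal, i.e. the cleaned speech. */
#include <stddef.h>

#define TAPS 64        /* adaptive filter length (assumed)      */
#define MU   0.5f      /* NLMS step size, 0 < MU < 2            */
#define EPS  1e-6f     /* regularizer to avoid division by zero */

void nlms_cancel(const float *d, const float *x, float *e, size_t n)
{
    static float w[TAPS];      /* adaptive weights, start at zero */
    static float xbuf[TAPS];   /* delay line of reference samples */

    for (size_t i = 0; i < n; i++) {
        /* shift the newest reference sample into the delay line */
        for (int k = TAPS - 1; k > 0; k--)
            xbuf[k] = xbuf[k - 1];
        xbuf[0] = x[i];

        /* filter output = estimate of the noise at the primary mic */
        float y = 0.0f, energy = EPS;
        for (int k = 0; k < TAPS; k++) {
            y      += w[k] * xbuf[k];
            energy += xbuf[k] * xbuf[k];
        }

        /* error = primary - noise estimate, approximately the clean speech */
        e[i] = d[i] - y;

        /* normalized LMS weight update */
        float g = MU * e[i] / energy;
        for (int k = 0; k < TAPS; k++)
            w[k] += g * xbuf[k];
    }
}
```

In this structure the error signal doubles as the output: as the filter converges, the correlated noise is subtracted and what remains in e[] is approximately the speech.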
ExDetect
Authors: Yixin Shi, Qinwen Xu, Zhanpeng Yi
Abstract: The human visual system understands emotions on a face very easily. However, it still takes considerable effort to develop an automated facial expression recognition system with high accuracy and short delay. Here, a real-time facial expression recognition prototype is developed. The system detects a single face in a real-time video sequence and then attempts to recognize a set of emotional expressions: joy, surprise, disgust, anger, and neutral. It should respond to emotion variation without perceivable delay. First, skin color is used to locate the face region in the video stream; then the LBP operator is applied to small blocks of the extracted face, and the resulting block histograms are concatenated into a single feature vector. Template matching is used as the classifier, and the output is one of the five predefined emotions.
Video: [TBD]
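As background for the LBP features mentioned in the ExDetect abstract above, the sketch below computes the basic 8-neighbour LBP code and a 256-bin histogram for one image block; in the described pipeline such block histograms would be concatenated into the full feature vector. Block geometry, data layout, and names are assumptions.

```c
/* Basic 8-neighbour LBP histogram for one image block (illustrative).
 * img: grayscale image with row stride `stride` (in pixels).
 * (x0, y0), w, h: block interior, at least 1 pixel away from the image border.
 * hist[256]: output histogram of LBP codes. */
#include <string.h>

void lbp_block_hist(const unsigned char *img, int stride,
                    int x0, int y0, int w, int h, unsigned hist[256])
{
    /* offsets of the 8 neighbours, clockwise from the top-left */
    static const int dx[8] = { -1,  0,  1, 1, 1, 0, -1, -1 };
    static const int dy[8] = { -1, -1, -1, 0, 1, 1,  1,  0 };

    memset(hist, 0, 256 * sizeof(unsigned));
    for (int y = y0; y < y0 + h; y++) {
        for (int x = x0; x < x0 + w; x++) {
            unsigned char c = img[y * stride + x];
            unsigned code = 0;
            for (int k = 0; k < 8; k++) {
                /* set bit k if the neighbour is at least as bright as the centre */
                if (img[(y + dy[k]) * stride + (x + dx[k])] >= c)
                    code |= 1u << k;
            }
            hist[code]++;
        }
    }
}
```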
FaceDetc
Authors: Li Cheng, Jinghan Xu, Xin Wei
Abstract: Real-time face detection.
Video: [TBD]
Finger Painting
Authors: Carlos Figueroa, Deniz Kumlu, Bahri Maras
Abstract: With just your fingers you can draw an image, and it will appear on an external monitor. Using different hand gestures, you can control when to draw, erase, or pause and simply track your movements.
Video: [TBD]
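A tiny sketch of how the drawing step described in the Finger Painting abstract above could look: in draw or erase mode the tracked fingertip is stamped onto a persistent canvas that is sent to the monitor. The mode names, brush size, and canvas layout are assumptions, not the authors' code.

```c
/* Stamp the tracked fingertip onto a persistent grayscale canvas. */
enum mode { MODE_PAUSE, MODE_DRAW, MODE_ERASE };

void update_canvas(unsigned char *canvas, int width, int height,
                   int finger_x, int finger_y, enum mode m)
{
    if (m == MODE_PAUSE)
        return;                    /* track only, leave the canvas untouched */

    /* paint (or erase) a small square brush around the fingertip */
    const int brush = 3;
    unsigned char value = (m == MODE_DRAW) ? 255 : 0;

    for (int dy = -brush; dy <= brush; dy++) {
        for (int dx = -brush; dx <= brush; dx++) {
            int x = finger_x + dx, y = finger_y + dy;
            if (x >= 0 && x < width && y >= 0 && y < height)
                canvas[y * width + x] = value;
        }
    }
}
```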
IRIS(Intelligent Recognition of Individual Signs)
Authors: Karthik Tadepalli, Ravishankar Ramesh, Sharannya Sudarsham
Abstract: The project involves recognition of American Sign Language. The 24 static letters of the alphabet are detected using various image processing techniques.
Video: [TBD]
Magic Face
Authors: Jinkai Wang, Ya Cao, Yao Lin
Abstract: Our project implements facial expression recognition in real time. Using the DSK6416T and a camera, the face, eyes, and mouth are tracked and bounding boxes are displayed by the board. Smile, surprise, and neutral expressions can be recognized.
Video: [TBD]
Paper Piano
Authors: Hang Dong, Dana Morgenstern, Yu Rong
Abstract: Our project builds a simple virtual piano: a piece of paper serves as the keyboard and a camera tracks the finger movements to select the notes to play. Tracking the fingers and detecting which keys are pressed is achieved with video processing techniques. The notes corresponding to the pressed keys are output through a loudspeaker. We may also display the result of the edge detection of the paper piano keys and fingertips on the board's LCD.
Video: [TBD]
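To make the Paper Piano pipeline concrete, this hypothetical sketch maps a tracked fingertip position on the paper keyboard to a key index and synthesizes the matching tone; the key count, note range, sample rate, and all names are assumptions rather than the authors' implementation.

```c
#include <math.h>

#define NUM_KEYS  8          /* one octave of white keys, C4..C5 (assumed) */
#define SAMPLE_HZ 8000.0f    /* codec sample rate assumed for this sketch  */
#define TWO_PI    6.2831853f

/* Map the fingertip x coordinate to a key index, given the detected
 * left and right edges of the paper keyboard in the image. */
int key_from_x(int finger_x, int paper_left, int paper_right)
{
    int idx = (finger_x - paper_left) * NUM_KEYS / (paper_right - paper_left);
    if (idx < 0) idx = 0;
    if (idx >= NUM_KEYS) idx = NUM_KEYS - 1;
    return idx;
}

/* Fill an output buffer with a sine tone for the pressed key. */
void synth_note(int key, short *out, int n)
{
    /* frequencies of C4 D4 E4 F4 G4 A4 B4 C5 in Hz */
    static const float freq[NUM_KEYS] =
        { 261.63f, 293.66f, 329.63f, 349.23f, 392.00f, 440.00f, 493.88f, 523.25f };
    static float phase = 0.0f;

    float step = TWO_PI * freq[key] / SAMPLE_HZ;
    for (int i = 0; i < n; i++) {
        out[i] = (short)(10000.0f * sinf(phase));   /* 16-bit codec sample */
        phase += step;
        if (phase > TWO_PI)
            phase -= TWO_PI;
    }
}
```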
Realistic Remote Viewing
Authors: Tim Brochier, Lucas Vollherbst
Abstract: This system will give you the best seat in the house, in your house. Using motion tracking to detect the performers' movements on stage, it provides the remote viewer with accurate left/right audio panning to give a full, realistic stereo image of the streaming performance.
Video: [TBD]
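A minimal sketch of position-driven stereo panning of the kind the Realistic Remote Viewing abstract describes, assuming the performer's horizontal stage position has already been normalized to the range 0..1 by the motion tracker; the constant-power pan law and the names are illustrative choices.

```c
#include <math.h>

/* Pan a mono input into left/right outputs according to stage position
 * (0.0 = far left, 1.0 = far right). */
void pan_stereo(const short *mono_in, short *left, short *right,
                int n, float position)
{
    if (position < 0.0f) position = 0.0f;
    if (position > 1.0f) position = 1.0f;

    /* constant-power pan: left^2 + right^2 stays constant as the
     * performer moves across the stage */
    float theta  = position * 1.5707963f;   /* pi / 2 */
    float gain_l = cosf(theta);
    float gain_r = sinf(theta);

    for (int i = 0; i < n; i++) {
        left[i]  = (short)(gain_l * mono_in[i]);
        right[i] = (short)(gain_r * mono_in[i]);
    }
}
```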
Smart Group
Authors: Li Li, Hao Xu, Chiho Choi
Abstract: Gaze Tracking. TBD
Video: [TBD]
Trojan DJs
Authors: Hasan Sayani, Pavankumar Vasu, Nikhil Parab
Abstract: We have developed a gesture-based DJ system that performs audio processing tasks such as equalization, stereo panning, crossfading, and pitch variation in response to the gestures detected by our system, which is built on a Kinect and the DM6437 DSP.
Video: [TBD]
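As one example of the audio tasks the Trojan DJs abstract lists, here is a hypothetical equal-power crossfade between two decks, with the fader position assumed to come from the gesture-recognition front end; the fade law and all names are assumptions.

```c
#include <math.h>

/* Cross-fade deck A into deck B; fader = 0.0 plays only A, 1.0 only B. */
void crossfade(const short *deck_a, const short *deck_b, short *out,
               int n, float fader)
{
    if (fader < 0.0f) fader = 0.0f;
    if (fader > 1.0f) fader = 1.0f;

    /* equal-power law keeps the perceived loudness roughly constant */
    float gain_a = cosf(fader * 1.5707963f);
    float gain_b = sinf(fader * 1.5707963f);

    for (int i = 0; i < n; i++) {
        float mix = gain_a * deck_a[i] + gain_b * deck_b[i];
        if (mix >  32767.0f) mix =  32767.0f;   /* clip to 16-bit range */
        if (mix < -32768.0f) mix = -32768.0f;
        out[i] = (short)mix;
    }
}
```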
USC Rangers
Authors: Peiran Gong, Haowei Tseng, Bo Zhang
Abstract: We are building an interactive game based on gesture recognition.
Video: [TBD]
Video Photoshop
Authors: Xiaqing Pan, Linlin Zhang, Chen Chen
Abstract: Video Photoshop is not just about transferring Photoshop's special effects to the DSP board and applying them to input frames. Beyond that, we are exploring the capability of the DaVinci platform and trying to understand and implement video processing under limited memory and computation power. Our goal is to make the special effects vivid and as close to real time as possible.
Video: [TBD]
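To illustrate the kind of per-frame effect the Video Photoshop abstract refers to, here is one simple possibility, a 3x3 sharpening convolution over a grayscale frame; the frame layout and function name are assumptions and this is not the authors' implementation.

```c
/* Apply a 3x3 sharpen kernel (centre 5, cross neighbours -1) to a
 * grayscale frame; in and out must be separate buffers, and the
 * one-pixel border of out is left untouched. */
void sharpen_frame(const unsigned char *in, unsigned char *out,
                   int width, int height)
{
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int c = 5 * in[y * width + x]
                  -     in[(y - 1) * width + x]
                  -     in[(y + 1) * width + x]
                  -     in[y * width + (x - 1)]
                  -     in[y * width + (x + 1)];
            if (c < 0)   c = 0;     /* clamp to the valid pixel range */
            if (c > 255) c = 255;
            out[y * width + x] = (unsigned char)c;
        }
    }
}
```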
Visual Object
Authors: Shira Epstein, Will Chung
Abstract: Virtual Object inserts a virtual object into the video feed captured by the camera. First, an object of known shape and size is placed in a fixed location in the real scene. Our code detects the object's feature points and, using the POSIT algorithm, estimates the relative camera pose in 3D space. Finally, the virtual object is drawn into the output video accordingly.
Video: [TBD]
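Once the pose has been estimated (for example with POSIT, as the Visual Object abstract describes), drawing the virtual object reduces to projecting its 3D points into the frame. The pinhole-projection sketch below shows that step; the camera parameters and all names are illustrative assumptions.

```c
typedef struct { float x, y, z; } Vec3;

/* Project one 3D point of the virtual object into pixel coordinates.
 * R: row-major 3x3 rotation, t: translation (the estimated pose),
 * f: focal length in pixels, (cx, cy): principal point.
 * Returns 0 if the point is behind the camera. */
int project_point(const float R[9], const Vec3 *t, float f,
                  float cx, float cy, const Vec3 *p, float *u, float *v)
{
    /* object coordinates -> camera coordinates */
    float xc = R[0] * p->x + R[1] * p->y + R[2] * p->z + t->x;
    float yc = R[3] * p->x + R[4] * p->y + R[5] * p->z + t->y;
    float zc = R[6] * p->x + R[7] * p->y + R[8] * p->z + t->z;

    if (zc <= 0.0f)
        return 0;

    /* pinhole projection */
    *u = cx + f * xc / zc;
    *v = cy + f * yc / zc;
    return 1;
}
```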