
Welcome to the Helios wiki!

Helios: Wearable Haptic for the Assistance of Visually Impaired

Aryan Anand, Ryan Rana, Watchung Hills Regional High School 


ABSTRACT
Visually impaired individuals face a multitude of challenges in their everyday lives. One of the many challenges they face is sensing and securing objects around them, whether at home or in an outside environment. We wonder: would it be possible for a wearable haptic to assist them in sensing, recognizing, and securing different objects in their surroundings? Creating such a device could assist the visually impaired in accomplishing their daily activities and improve their standard of living. The objective of our project is to create a closed-circuit haptic device situated on the user's wrist that the user can speak into; through AI (specifically, natural language processing and computer vision), the device directs the user toward their desired object. The wearable haptic should be able to assist the user in detecting objects mixed in with other objects within a close distance. When the user speaks into the built-in microphone attached to a Raspberry Pi, the device recognizes the input and searches within our database of previously stored objects. The Raspberry Pi then sends signals to the haptic buzzers situated on the user's wrist to localize the object and direct the user's hand toward it. These buzzers convey the simple command of going left, right, or forward relative to the desired object. By combining many systems working together, our project, named Helios after the god of sight in Greek mythology, will improve the living experience of visually impaired individuals.

Motivation and Approach

Helios is designed to assist the visually impaired in searching for and locating different objects in their everyday lives. This is not the first project to use haptic or computer-vision-based systems to aid the visually impaired. Several have been developed previously, but none adequately suit the needs of the visually impaired. One previous project is CamIO, a camera system that makes physical objects accessible to blind users. It starts with a camera situated above a desk, scanning the objects below; paired with text-to-speech and real-time audio feedback, the user can locate the objects beneath it (Shen et al., 2013). The problem with CamIO is that users have to set up a camera in the location they want scanned. This becomes a liability when users want to take their haptic outside or to other places, so it is not very mobile. In 2004, researchers developed a haptic device wearable on a human arm: a device mounted on the arm's joints to help control the arm and its functions (Yang et al., 2005). It consists of three sequentially connected modules, i.e., a 3-DOF wrist module, a 1-DOF elbow module, and a 3-DOF shoulder module, designed to adapt to the motions of the human arm's skeletal joints at the wrist, elbow, and shoulder, respectively (Yang et al., 2005). It can control arm movements with the three joints, but it is more complex and not very user-friendly because of the device's large frame. Helios outperforms these previous works in user-friendliness, mobility, and speed. Helios is mobile because it can be taken anywhere, unlike CamIO; it is user-friendly because of its adjustable velcro straps; and it can recognize and localize objects faster than either of the previously listed projects.

Helios is a closed-circuit solution that takes input from the user and analyzes the environment to give a location output. Users speak into the microphone, which triggers the camera to search for the object; the buzzers then guide the user toward it. The buzzers give the user a more interactive feel while also serving as the visually impaired user's sense of direction. This is accomplished with three buzzers attached to haptic controllers and positioned on the user's wrist: one buzzer indicates going left, one indicates going forward, and one indicates going right. Helios essentially makes it possible for users to find anything they need without having to physically see it.
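To make the left/forward/right logic concrete, here is a minimal illustrative sketch; the helper name, frame width, and thirds-based thresholds are our own assumptions for illustration, not values taken from the Helios code.

```python
# Illustrative sketch (hypothetical helper, not the actual Helios code):
# map the horizontal position of a detected object's bounding-box center
# to one of the three wrist buzzers.
def choose_buzzer(box_x_center: float, frame_width: int) -> str:
    """Return 'left', 'middle' (go forward), or 'right'."""
    position = box_x_center / frame_width
    if position < 1 / 3:
        return "left"
    if position < 2 / 3:
        return "middle"   # object roughly straight ahead
    return "right"

# Example: an object centered at pixel 1000 in a 1280-pixel-wide frame
print(choose_buzzer(1000, 1280))  # -> "right"
```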

Project Logistics and Organization

Our approach has many working parts, so to achieve our product we have systematized the way each part fits together. First, we need to CAD-design Helios, both physically and virtually. Next is our build phase, during which we construct the prototype that will be used for testing. For the build phase to be executed effectively, we need to obtain the resources called for by our design. After acquiring these resources, we put the device together according to our virtual design. The steps for assembling Helios are simple. First, lay out the velcro wrist brace so that applying the components is easier. Then, using a GPIO extension on the side, connect the Raspberry Pi to the mini breadboard. On the breadboard, wire the haptic controllers and buzzers: red wire to 3V, black wire to ground, yellow wire to SCL, blue wire to SDA, and black and red wires to the controller's negative and positive terminals, respectively (Ada, n.d.). We solder the buzzers to the haptic controllers to ensure stability. After the breadboard wiring is done and connected to the Raspberry Pi, use velcro strips to attach the assembly to the wrist brace. Lastly, label the buzzers as left, middle, and right so they can be easily referenced in the code. The haptic device will eventually be connected to a battery, but for testing we can simply use outlet power. The diagram below illustrates our final design.
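As a quick sanity check of the wiring described above, a minimal sketch (assuming the Adafruit DRV2605L breakouts and the adafruit_drv2605 library from the cited Adafruit guide) could pulse one buzzer so the left/middle/right labels can be verified before final assembly:

```python
# Minimal sketch, assuming an Adafruit DRV2605L haptic controller wired over
# I2C as described above (yellow -> SCL, blue -> SDA) with a buzzer/motor on
# its output terminals. Pin names follow the Blinka/CircuitPython convention.
import time

import board
import busio
import adafruit_drv2605

i2c = busio.I2C(board.SCL, board.SDA)
drv = adafruit_drv2605.DRV2605(i2c)

# Effect 1 is a strong click from the DRV2605's built-in effect library.
drv.sequence[0] = adafruit_drv2605.Effect(1)

for _ in range(3):
    drv.play()       # start the buzz
    time.sleep(0.5)
    drv.stop()       # stop it so the pulses are distinct
    time.sleep(0.5)
```

In the full device, the same pattern would repeat for each of the three labeled buzzers.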

Next is our programming phase, where we will code all of the necessary parts of this project. The flowchart below sums up how our code will work.


The code is written entirely in Python and uses a variety of libraries and models. To convert speech to text, the program uses the Python library speech_recognition, which takes the microphone input and uses NLP (natural language processing) to recognize the text (Wijetunga, 2021). Operating the camera is handled by raspistill, the Raspberry Pi command-line tool that captures pictures and video from the Pi Camera 3. The artificial intelligence portion uses multiple models and libraries. They all run on TensorFlow, an end-to-end machine learning library that can implement many different models (Wijetunga, 2021). The first model uses Keras, which augments the data, extracts the features of all the dataset images, and groups the layers of the object so the AI can develop pattern recognition; it uses a multi-output model once labels have been saved for the images (Haifeng et al., 2019). This is a neural network with multiple output variables: the classification itself and also the object's location in the frame (Haifeng et al., 2019). The detection model is a CNN (convolutional neural network), which extracts pixel data from the image and uses that information, along with concepts learned during training, to classify the image (Rivera et al., n.d.). This happens after bounding boxes have been placed automatically by TensorFlow; the CNN is run in each bounding box and assigns it a confidence score.
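To make the first two stages concrete, here is a minimal hedged sketch of the speech-to-text and image-capture steps; it assumes a working microphone, the third-party speech_recognition package, and a Raspberry Pi OS install that provides raspistill, and the output file name is a placeholder rather than the project's actual path.

```python
# Minimal sketch of the speech-to-text and image-capture steps described above.
# Assumes a microphone visible to PyAudio, the speech_recognition package, and
# a Raspberry Pi OS install that provides the raspistill command-line tool.
import subprocess

import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Which object should Helios look for?")
    audio = recognizer.listen(source)

try:
    # recognize_google uses Google's free web API, so it needs a network connection
    target_object = recognizer.recognize_google(audio).lower()
except sr.UnknownValueError:
    target_object = None

if target_object:
    # Capture one frame from the Pi camera; "frame.jpg" is a placeholder path
    subprocess.run(["raspistill", "-o", "frame.jpg", "-t", "1000"], check=True)
    print(f"Captured frame, searching for: {target_object}")
```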

For the completion of our project, we have already purchased the materials needed for our wearable haptic prototype through Amazon and Adafruit. We would appreciate assistance in making our prototype more aesthetically pleasing and comfortable for users. However, the main resource we need from this program is mentorship. We need a mentor to assist us with the final part of our project: implementing our code on the Raspberry Pi. Currently, we are running into problems getting our AI Python systems onto the Raspberry Pi, including installing the different data models on the Pi. Having a mentor experienced in artificial intelligence, knowledgeable about machine learning models, and skilled in debugging would be beneficial to the completion of our project. If MIT Think can aid us in finishing our project, then we can turn our prototype into a finished product and market it to millions of visually impaired individuals across the globe. This would greatly benefit their everyday lives and is a shining example of using technology for good.

Budget of our materials

Hardware Budget

| Item | Cost | Link |
| --- | --- | --- |
| Microcontroller (Raspberry Pi 4) | $160.00 | https://www.amazon.com/dp/B07TC2BK1X/ref=cm_sw_r_api_i_JR8Y6J1W5E7K2RHH25YY_0?th=1 |
| Buzzers | $8.00 | https://docs.google.com/document/d/1k3RiPoEUMTPnt_A7AYZ_B8-P-NmOcTHDCY3NjENn9t0/edit |
| Soldering Iron | $55.00 | https://www.amazon.com/SunFounder-Microphone-Raspberry-Recognition-Software/dp/B01KLRBHGM/ref=sr_1_3?adgrpid=1337006703469094&hvadid=83563133892796&hvbmt=bp&hvdev=c&hvlocphy=96512&hvnetw=o&hvqmt=p&hvtargid=kwd-83563263919287%3Aloc-190&hydadcr=18031_13443530&keywords=raspberry%2Bmicrophone&qid=1659638900&sr=8-3&th=1 |
| Wrist Brace | $17.50 | https://www.amazon.com/dp/B01DGZFSNE/?tag=thewire06-20&linkCode=xm2&ascsubtag=AwEAAAAAAAAAAhF6&th=1 |
| Pin cable | $0.95 | https://www.amazon.com/Carpal-Tunnel-Wrist-Brace-Support/dp/B07JWH8C87/ref=asc_df_B07JJ47VZS/?tag=&linkCode=df0&hvadid=309722106657&hvpos=&hvnetw=g&hvrand=4635499784034306484&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9003510&hvtargid=pla-607176280210&ref=&adgrpid=60862488719&th=1 |
| Half breadboards | $5.00 | https://www.adafruit.com/product/4539 |
| Female to male | $1.95 | adafruit.com/product/1953?gclid=CjwKCAjw2rmWBhB4EiwAiJ0mtYTgNgapJRQKRDOXJTZYZCZyrpNXYO7fUlzLOW1aVl-8sX52V2Z2rBoCKgwQAvD_BwE |
| Male to male | $3.95 | https://www.adafruit.com/product/1953?gclid=CjwKCAjw2rmWBhB4EiwAiJ0mtYTgNgapJRQKRDOXJTZYZCZyrpNXYO7fUlzLOW1aVl-8sX52V2Z2rBoCKgwQAvD_BwE |
| Camera (PiCam) | $39.95 | https://www.adafruit.com/product/5247 |
| Total | $292.30 | |


The goals and milestones for Helios are to successfully complete all five of our proposed tests and have a fully functioning prototype that can be tested with visually impaired individuals. The tests are designed to ensure Helios is capable enough to be used in the real world. It is important to note that all object recognition tests are planned to take place on a flat surface, where all objects in view are on the same plane and level as each other.

Test 1: Helios detecting one object

The purpose of this test is for Helios to use its computer vision to detect a single object and guide the user toward it. This test serves as a baseline so we can see whether the basics are working.

Test 2: Helios detecting one object next to two different objects

The purpose of this second test is to evaluate Helios's recognition skills. Compared to the baseline test, Helios will now have to take in three objects from the computer vision and match the correct one to the input from the microphone. For example, this could be detecting an apple next to an orange and a blueberry.

Test 3: Helios detecting one object compared to two similar objects

The purpose of this third test is to further evaluate Helios's recognition skills. Compared to the second test, the objects will all be very similar to each other. For example, this could be Helios detecting a green apple next to a brown apple and a red apple. This further tests Helios's ability to distinguish objects.

Test 4: Helios detecting one object compared to 10+ objects

The purpose of this test is to evaluate whether Helios can handle many distractions, for example, placing one apple among many different fruits, including some similar ones. This is to ensure Helios can detect any object in our database, no matter how or where it is positioned.

Test 5: Helios used in a supermarket

This last test is the one that most closely resembles a real-world scenario. The test will be conducted with a visually impaired individual, who will use Helios to assist them in finding and purchasing the items on their grocery list. For example, they can use Helios to detect a specific type of pasta sauce or a specific type of ice cream.

Overall, if Helios passes these five tests, it will show that Helios can assist visually impaired individuals in any task that involves detecting and securing objects.

Helios passing these tests without error is ideal, but there are bound to be some errors along the way. 

The following addresses possible errors and solutions:

  1. Helios not recognizing an object's existence: This relates to the feature extraction and/or the Keras data augmentation, which will need to be adjusted.

  2. Helios not localizing an object: This relates to the confidence levels of the bounding boxes, which can be visualized with confidence mapping. The confidence threshold will therefore need to be raised, which will be handled in the recognition part of the code (a sketch of this filtering step follows this list).

  3. Helios not being able to classify an object: The object classification model will need to be corrected; this is the CNN feature-layer extraction, which will need further training and testing.
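For the localization issue in item 2, a simple sketch of confidence-based filtering is shown below; the detection dictionary format and the 0.6 threshold are assumptions for illustration, not values from our actual pipeline.

```python
# Hedged sketch: keep only bounding boxes the detector is reasonably sure
# about. The detection format and the threshold value are illustrative
# assumptions, not the project's real data structures.
CONFIDENCE_THRESHOLD = 0.6  # would be tuned using the confidence mapping

def filter_detections(detections):
    """Drop low-confidence boxes before localization.

    `detections` is assumed to look like:
    [{"label": "apple", "score": 0.83, "box": (x1, y1, x2, y2)}, ...]
    """
    return [d for d in detections if d["score"] >= CONFIDENCE_THRESHOLD]

# Example usage
sample = [
    {"label": "apple", "score": 0.91, "box": (120, 60, 220, 180)},
    {"label": "apple", "score": 0.32, "box": (400, 90, 470, 150)},
]
print(filter_detections(sample))  # only the 0.91 detection survives
```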

Timeline

  1. 2/20/23: Fix the issue with implementing Keras and TensorFlow models on the Raspberry Pi.

  2. 3/1/23: Finish Test 1 of the Helios tests and make sure that all parts of the computer vision code are working with no errors.

  3. 3/20/23: Finish Test 2 of Helios tests.

  4. 4/10/23: Finish Test 3 of Helios tests.

  5. 5/1/23: Finish Test 4 of Helios tests.

  6. 6/1/23: Complete all five of the Helios tests and make sure all aspects of Helios are functioning properly.

Current Progress:

As stated above, we have already completed designing and constructing the prototype for Helios. We have also written the code for Helios but have yet to test it. Our problem is that we are running into errors downloading different machine learning models onto our Raspberry Pi. We are also having trouble creating annotations for the different objects in our code.

Personal Interest

Our personal interest in this project comes from the grandmother of one of our members. His grandmother is visually impaired and has trouble doing basic tasks. Learning about his experiences with his grandmother motivated us to create a device to assist the visually impaired in some way. We chose a haptic device because we believe it is the best way to assist visually impaired individuals. Our background in engineering and computer science qualifies us to accomplish this project. Our engineering/design lead has completed robotics programs at NYU and Boston University and has previously written a research paper about haptic technology. He also has experience with CAD design in applications like Onshape and Fusion 360. Our software lead is skilled in Python, front-end development, HTML, web development, research, scientific writing, JavaScript, PHP, Jupyter, pandas, Raspberry Pi, NumPy, back-end development, and machine learning models (unsupervised and supervised). He is a published researcher, has worked for multiple internships and startups, and has developed a multitude of active, in-use applications.

References

Ada, Lady. (n.d.). Python & CircuitPython | Adafruit DRV2605L Haptic Controller Breakout. Adafruit Learning System. Retrieved December 31, 2022, from https://learn.adafruit.com/adafruit-drv2605-haptic-controller-breakout/python-circuitpython

Guilin Yang, Hui Leong Ho, Weihai Chen, Wei Lin, Song Huat Yeo, & Kurbanhusen, M. S. (2005). A haptic device wearable on a human arm. IEEE Conference on Robotics, Automation and Mechatronics, 2004. https://doi.org/10.1109/ramech.2004.1438924

Haifeng, J., Qingquan, S., & Xia, H. (2019, June 25). Auto-Keras: An Efficient Neural Architecture Search System. ACM Digital Library; ACM. https://dl.acm.org/doi/abs/10.1145/3292500.3330648

Rivera, A. G., Sharp, T., Kohli, P., Fitzgibbon, A., Glocker, B., Shotton, J., & Izadi, S. (n.d.). Multi-task CNN model for attribute prediction. IEEE. https://arxiv.org/abs/1601.00400

Shen, H., Edwards, O., Miele, J., & Coughlan, J. (n.d.). CamIO: A 3D computer vision system enabling audio/haptic interaction with physical objects by blind users. The Smith-Kettlewell Eye Research Institute. Retrieved December 31, 2022, from https://www.ski.org/sites/default/files/publications/camio-assets2013.pdf

Wijetunga, C. (2021, July 11). Building an object detector in TensorFlow using bounding-box ... Nerd for Tech. https://medium.com/nerd-for-tech/building-an-object-detector-in-tensorflow-using-bounding-box-regression-2bc13992973f
