We initially wanted to interview people from Tech Era, an organization that works with disabled people, as they would have more insight into our product. Unfortunately, our ethics approval was denied because interviewing visually impaired people through Tech Era was considered too narrow an interview group. We also reached out to WeWALK, a company that develops smart canes, contacting its PR team and co-founders, Kursat Ceylan and Sadik Unlu, to ask whether we could interview them for advice on designing smart canes.
While we were not able to interview visually impaired individuals or WeWALK employees, we instead interviewed some of our fellow students to get feedback on our project, sending an approved consent form to each interviewee before conducting the interview. We acknowledge that our interviewees are not representative of Opticane's target audience, but they were the only people accessible to us under the SDP research restrictions.
The main concerns raised about our project were that users could be confused by the buzzer placement and that they might become anxious in a crowded place where all the buzzers go off at once. To address this, given more time we would like to dampen the haptic feedback when it has been triggered for long periods.
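As an illustration of what such a dampener could look like, the sketch below scales down a buzzer's PWM duty cycle the longer it has been continuously active; the constants, names and the assumption that the buzzers are PWM-driven are ours rather than part of the current implementation.

```python
import time
from typing import Optional

# Hypothetical dampener: the longer a buzzer has been continuously active,
# the lower its PWM duty cycle, down to a floor so the alert never vanishes.
FULL_DUTY = 1.0        # duty cycle for a fresh alert
MIN_DUTY = 0.3         # floor after prolonged buzzing
DAMPEN_AFTER_S = 5.0   # start fading after this much continuous activity
FADE_WINDOW_S = 10.0   # time taken to fade from FULL_DUTY to MIN_DUTY


def dampened_duty(active_since: Optional[float]) -> float:
    """Return the duty cycle for a buzzer active since `active_since`
    (a time.monotonic() timestamp), or 0.0 if the buzzer is inactive."""
    if active_since is None:
        return 0.0
    active_for = time.monotonic() - active_since
    if active_for <= DAMPEN_AFTER_S:
        return FULL_DUTY
    # Fade linearly from FULL_DUTY down to MIN_DUTY over the fade window.
    progress = min((active_for - DAMPEN_AFTER_S) / FADE_WINDOW_S, 1.0)
    return FULL_DUTY - progress * (FULL_DUTY - MIN_DUTY)
```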
All of our participants agreed that users of the standard white cane would encounter disruptions because the cane cannot detect some objects. One participant said, “It happens every day if they are alone. It is likely to happen, walking without a guide dog,” while another said, “Yes but not that frequently. Because they are accustomed to that life, but when they are in a new environment it definitely would be a problem.”
Opinions on the minimum battery life differed, ranging from 6 hours to 12 hours. Lastly, a common suggestion was to include a portable battery and a way of communicating when the battery needs charging, which we would like to implement given more time; one possible approach is sketched below.
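One way the low-battery warning could be communicated is with a distinctive haptic pattern. The sketch below assumes a hypothetical read_battery_percent() helper backed by whatever fuel gauge or ADC the final hardware provides, and a caller-supplied function that pulses the buzzers; both are placeholders.

```python
import time

LOW_BATTERY_PERCENT = 20   # illustrative warning threshold
CHECK_INTERVAL_S = 60      # how often to poll the battery


def read_battery_percent() -> float:
    """Placeholder: return the remaining charge (0-100) from whatever fuel
    gauge or ADC the final hardware exposes."""
    raise NotImplementedError


def battery_watchdog(pulse_buzzers) -> None:
    """Poll the battery and fire a distinctive haptic pattern (three short
    pulses via the caller-supplied `pulse_buzzers`) once the charge is low."""
    warned = False
    while True:
        if not warned and read_battery_percent() < LOW_BATTERY_PERCENT:
            pulse_buzzers(pulses=3, on_s=0.2, off_s=0.2)
            warned = True
        time.sleep(CHECK_INTERVAL_S)
```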
The participants pointed out that our project addresses the limitations of the white cane, which only gives information about the surroundings in close proximity and only in the direction the cane is pointing.
Design clearly plays an important role in assistive technologies: a badly designed interface can seriously disadvantage a particular group of people. Designers and developers therefore have a responsibility not to marginalize atypical users, and should be aware of the importance of investing time in research, testing the product, including disabled people in those tests, and being properly trained for the work. Our team has put significant effort into researching ways to make the cane's design inclusive; our conclusion is to create customized handle designs that account for the various grip methods individuals use.
We have analyzed some ethical implications of AI for disabled people through the lens of fairness and justice. For example, in object recognition, "a recent paper demonstrates that systems using computer vision are developed largely in a white, western and middle-class context, failing to recognize common household objects that are more-often found in poor or non-western environments." These biases risk harming people who are already marginalized, both in wider communities and within disabled communities. It is also worth noting that "a computer vision system for accessibility, while rendering things more accessible, does so by shifting the center of analysis and judgment away from the user and towards the technology in hand." Even when computer vision is implemented correctly, it legitimizes surveillance, and the misuse of surveillance can lead to serious problems. This also raises the question “how could technology to assist a blind person be kept from integration into policing technologies; who’s to say blind people aren’t among the users of policing technologies?”
There have also been concerns regarding the bias of facial recognition systems with respect to ethnic minorities. Studies have shown that "the accuracies of face recognition systems used by US-based law enforcement are systematically lower for people labeled female, Black, or between the ages of 18-30 than for other demographic cohorts." There is "a lack of datasets labeled by ethnicity that limits the generalizability of research exploring the impact of ethnicity on gender classification accuracy." A gender classification accuracy report from the National Institute of Standards and Technology in the US showed that none of the 10 locations used in the study were in Africa or the Caribbean, where there are significant Black populations. Using a facial recognition system could also raise a privacy issue, as the system would take photos of nearby individuals, which would require their consent. Because of these concerns, we have decided not to move forward with computer vision and facial recognition for production.
Speech recognition systems have sparked debate regarding the privacy of their users. Jay Stanley, Senior Policy Analyst with the ACLU Speech, Privacy, and Technology Project, notes that "a warrant from police in Arkansas seeking audio records of a man’s Amazon Echo has sparked an overdue conversation about the privacy implications of 'always-on' recording devices". [17] “Always-on” recording systems are increasingly present in our lives, but they need to be treated with more caution.
"Plenty of devices are programmed to keep their microphones on at all times but only record and transmit audio after hearing a trigger phrase—in the case of the Echo, for example, 'Alexa.'" [17] Any device that is to be activated by voice alone must work this way. However, there are a range of other systems. For example, "Samsung assured the public that its smart televisions (like others) only record and transmit audio after the user presses a button on its remote control." [17]
Nevertheless, there is always the risk that companies "lie about their product (the case of Volkswagen)", get hacked, or that "government agencies might try to order companies to activate them as a surveillance device." [17]
Because of these ethical concerns, we have kept our device offline: it only records the user's voice while a button is pushed, and the recording is deleted afterwards.
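A minimal sketch of this record-then-delete flow is shown below. It assumes the push-to-talk button is wired to a GPIO pin read through gpiozero and that arecord is available on the Raspberry Pi; the pin number, file path and process_audio hook are illustrative.

```python
import os
import subprocess

from gpiozero import Button  # push-to-talk button; GPIO 17 is illustrative

BUTTON = Button(17)
RECORDING_PATH = "/tmp/opticane_command.wav"


def record_and_discard(process_audio) -> None:
    """Record only while the button is held, hand the clip to the offline
    recognizer, then delete the file so no audio is retained."""
    BUTTON.wait_for_press()
    # arecord captures from the default ALSA device until it is stopped.
    recorder = subprocess.Popen(["arecord", "-f", "cd", RECORDING_PATH])
    BUTTON.wait_for_release()
    recorder.terminate()
    recorder.wait()
    try:
        process_audio(RECORDING_PATH)   # offline speech recognition step
    finally:
        if os.path.exists(RECORDING_PATH):
            os.remove(RECORDING_PATH)   # the recording is never kept
```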
The greatest concern with GPS tracking is the amount of information that can be deduced from analysing a person’s movements.
Moreover, GPS can be prone to errors. "Dense forest, tall buildings, cloud cover and moisture produce inaccuracies in readings but these are considered negligible when compared to the potential for inaccuracies in resultant information processing." [18] In that case, the important question is who is liable for the accuracy of the information. Notably, the "software used to store tracking data makes it possible to edit data points in order to create false evidence". [18] For example, someone could be accused of a crime they did not commit. That is why it is important to have clear regulations regarding the use of GPS.
For this reason, we have designed the system so that if someone were able to access the Raspberry Pi, they would only be able to access the saved marker locations and nothing else, because no other data is stored.
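The sketch below illustrates how narrow the stored data is under this design: a single file of labelled coordinates and nothing else. The file path and JSON structure are illustrative.

```python
import json
from pathlib import Path

MARKER_FILE = Path("/home/pi/opticane/markers.json")  # illustrative location


def load_markers() -> dict:
    """Read the saved markers, or return an empty dict if none exist yet."""
    if MARKER_FILE.exists():
        return json.loads(MARKER_FILE.read_text())
    return {}


def save_marker(label: str, latitude: float, longitude: float) -> None:
    """Store a named marker; only these coordinates are ever written to disk,
    with no movement history or timestamps."""
    markers = load_markers()
    markers[label] = {"lat": latitude, "lon": longitude}
    MARKER_FILE.write_text(json.dumps(markers, indent=2))
```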
For our final hardware prototype of the cane, we have endeavoured to find the smallest and lightest hardware components possible that still provide the same level of functionality. Until now we had simply assumed that the lighter the cane the better, without researching the optimal or minimum acceptable cane weights. A 2013 study found that lighter canes made of carbon fibre (around 113g) did not strain the wrist and upper muscles as much as conventional canes weighing around 252g: "results indicated that the newly developed cane reduced the loads on muscles by approximately 50%" [30]. This tells us that for a comfortable final product the cane itself should be a newer carbon-fibre model. It also gives us a parameter to work with: 252g is the acceptable cane weight, so we should select parts for the cane handle so that the combined weight of the carbon-fibre cane and the handle stays below 252g. Currently our TF Luna LiDAR is 5g, the MG90S servo motor is 14g, the Raspberry Pi Zero is 9g and the RS Pro battery is 21g, and we estimate the weight of the casing at around 150g. We would therefore hope the final product weighs around 200g, bringing it closer to the optimal weight of a white cane.
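As a quick sanity check, the sum below adds up the component weights listed above (the 150g casing figure is our own estimate) and compares the total with the 252g reference weight from [30]; it is a budget sketch, not a measured weight.

```python
# Rough weight-budget check using the component weights listed above (grams).
# The casing figure is an estimate; the 252 g limit comes from the 2013 study [30].
CANE_WEIGHT_LIMIT_G = 252

components_g = {
    "TF Luna LiDAR": 5,
    "MG90S servo motor": 14,
    "Raspberry Pi Zero": 9,
    "RS Pro battery": 21,
    "casing (estimated)": 150,
}

handle_total_g = sum(components_g.values())  # 199 g
print(f"Handle electronics and casing: {handle_total_g} g "
      f"({CANE_WEIGHT_LIMIT_G - handle_total_g} g under the 252 g reference weight)")
```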
One question asked in the last demo’s marking was whether the haptic feedback is affected by the cane hitting the ground. This kind of question would usually be resolved by in-person user testing; since that is unavailable, looking to the literature can help find an answer. A 2010 paper from École Polytechnique Fédérale de Lausanne describes a similar sort of ‘smart cane’ with haptic feedback, albeit with the feedback coming from a single motor. [31] That design used an inertia wheel and reported that a good force for haptic feedback is between 1000 and 4000 millinewtons. The paper also mentioned that more powerful motors, larger than our coin buzzer motors, may generate so much force that they could damage other parts of the handle. Operating at 5V, our mini disc motors generate a force of around 2500mN, so even the weakest feedback can be distinguished from the general vibration of the cane hitting the ground. [32]
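If we later want to keep the buzzer force within the band of 1000 to 4000mN suggested by [31], a simple guard like the one below could be used. It assumes roughly 2500mN at full 5V duty (from [32]) and, as a simplification, that force scales linearly with PWM duty cycle; neither assumption has been verified on our hardware.

```python
# Illustrative guard that keeps the requested buzzer force within the
# 1000-4000 mN band from [31], assuming ~2500 mN at full 5 V duty [32] and
# (simplistically) that force scales linearly with PWM duty cycle.
FORCE_AT_FULL_DUTY_MN = 2500.0
MIN_FORCE_MN = 1000.0
MAX_FORCE_MN = 4000.0


def duty_for_force(target_force_mn: float) -> float:
    """Clamp the requested force into the recommended band and convert it to a
    PWM duty cycle in [0, 1]; forces above what the motor can produce saturate
    at full duty."""
    clamped = max(MIN_FORCE_MN, min(target_force_mn, MAX_FORCE_MN))
    return min(clamped / FORCE_AT_FULL_DUTY_MN, 1.0)
```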
According to the Benewake TF Luna datasheet, we get the highest accuracy when the object is between 0.2m and 8m away from the LiDAR. [33]
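A small sketch of how readings could be restricted to this band is shown below; the function is illustrative and assumes distances arrive in metres.

```python
from typing import Optional

# The TF Luna datasheet reports the highest accuracy between 0.2 m and 8 m [33],
# so readings outside that band are discarded here.
MIN_RANGE_M = 0.2
MAX_RANGE_M = 8.0


def trusted_distance(distance_m: float) -> Optional[float]:
    """Return the reading if it lies in the trusted band, otherwise None so
    the caller can skip or smooth over the sample."""
    if MIN_RANGE_M <= distance_m <= MAX_RANGE_M:
        return distance_m
    return None
```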