1 - Introduction
With the advancement of technology and the development of new tools, the possibility for people with visual impairments to know what is displayed on a screen (of a computer or a smartphone) seems increasingly within reach (Shao et al., 2017). The accessibility of these devices now allows them to use screens independently, without asking another person for help. Blind people can thus access the digital environment both for personal use and in the professional field. On a computer, they use only the keyboard keys to move from field to field on the screen; it is therefore rather rare for a blind person to use the mouse. To find out what is written on the screen, they can use screen readers such as JAWS or NVDA (NonVisual Desktop Access), which read the text aloud by text-to-speech. In recent years, these screen readers have improved considerably but, unfortunately, they cannot always describe the images and shapes that appear on the screen. For this reason, research has focused on how to provide tactile and physical information to visually impaired and blind people through sensory supplementation devices (Lenay et al., 2003 ; Meijer, 1992 ; Vidal-Verdú & Hafez, 2007 ; Crossan & Brewster, 2008). Blind people have mental images that share common properties with those of sighted people but, for blind people, knowledge is constructed through relations between movements and tactile, auditory and olfactory stimuli (Cornoldi & Vecchi, 2000 ; Hatwell, 2003). In a way, the visual information appearing on the screen could be replaced by tactile and auditory stimuli.
Based on previous work and theories, this article describes how we implemented a user-centered design approach. The UAP Project (Universal Access Platform) aims at creating a tactile sensory supplementation system called Tactos to allow visually impaired and blind people to explore a computer screen (Lenay et al., 2003 ; Sribunruangrit et al., 2002 ; Ziat et al., 2007). Tactos lets the user feel the pixel colours of the computer screen through the braille cells of a small device: the braille cells react to the colours by moving their pins up or down. In other words, it gives blind people access, in tactile form, to visual information they could not otherwise obtain. This article presents the continuation of the project with a new version of the tactile sensory supplementation device.
Since the end users of the Tactos system are visually impaired, they do not rely on the same mental representations to move around as sighted people do. It is therefore not appropriate to give them the same movement instructions as a sighted person, and it was important to take their ways of thinking and moving into account. We established an approach to better adapt the tool. From users’ feedback, the development team was able to define the main design and correction axes. The users’ proposals made it possible to identify ergonomic problems and to formulate recommendations. The relevant elements found were discussed with users and then forwarded to the development team to improve the system.
After introducing the material, we will present the two types of content we created to perfect the design of Tactos : a tutorial and exploration maps.
2 - Theoretical background
The sense of sight seems essential to use a computer and navigate on the screen. However, vision does not seem to be essential for understanding spatial concepts (Aleman et al., 2001 ; Kaski, 2002 ; Thinus-Blanc & Gaunet, 1997 ; Vanlierde & Wanet-Defalque, 2004 ; Tinti et al., 2006). Blind people, like sighted people, can create, imagine and manipulate spatial representations and images (Kerr, 1983). If these two populations have the same imaging capacity, then blind people should be able to understand spatial information on a computer or smartphone screen. The lack of vision and of visual experience can be partly compensated by haptic perception (Heller, 2000). Blind people habitually touch objects with both hands and multiple fingers to replace visual information gathering. For example, by touching a glass on a table, they understand what the object is, whereas a sighted person would only need to look at it. For spatial encoding, spatial organization differs radically between haptics and vision: haptic space is governed by the individual’s body, while vision depends on the spatial coordinates of external elements (Hatwell, 1960 ; Révész, 1950 ; Warren, 1977). With the development of technological tools and computer systems, this essential visual perception can be replaced by other techniques (Toennies et al., 2011). Tactos is itself a perceptual supplementation device which aims at compensating for the lack of visual information.
For a blind person, exploring a relief-printed map (e.g., on thermoformed paper) of an unfamiliar environment can provide equivalent or even better information about the spatial arrangement of an environment or place than directly exploring that environment in real life (Bentzen, 1972 ; Blades et al., 1999 ; Espinosa et al., 1998 ; Ungar et al., 2000). It is therefore not necessary to know a place in order to explore a map of it.
The use of thermoformed paper for a map still has its limits : a small or very local exploration area, printing costs, etc… With the Tactos system, these limits can be mitigated. By switching from a paper format to a digital one, a greater diversity of areas and maps to explore becomes available. Economically, it also spares the user from printing a new map each time he wants to explore a place. Note that on a paper map, the person can explore with multiple fingers at the same time; by contrast, as we will see, Tactos offers only one point of action to access the information.
It has also been noted that the effectiveness of touch maps depends on the user’s hand-movement strategies (Blades et al., 1999 ; Berla & Butterfield, 1977), and inter-individual differences can be seen in these hand movements. Whatever the tool, genuine training in navigation or screen exploration should help reduce this gap between users. The creation of a tutorial therefore seems necessary so that all users start on an equal footing in the task of exploring the screen.
Haptic tracking strategies (using multiple fingers, both hands, or a point of reference) may not be instinctive. Accordingly, effective haptic strategies may be learned either after a substantial period of experimentation or through explicit instruction. A first introduction through a tutorial would give all users the same knowledge and awareness of how the tool works. Like sighted people, blind people may well have difficulties with their haptic tracking strategies. In one study, researchers asked blind adolescents to trace a line while placing the index finger of their non-dominant hand as a landmark (Berla & Butterfield, 1977). Some blind adolescents were not as good as others at tracking and identifying a country’s borders on a map (Berla et al., 1976): they stopped either too early or too late.
Placing this index finger as a reference point therefore allowed them to know that the entire shape had been explored. This landmark also let them know that, if they kept exploring and came back to this point, they were retracing an already explored shape. As a result, trained blind adolescents performed better than untrained ones.
2.1 - The Tactos system
2.1.1 - The MIT5
The Tactos box is a small rectangular device comprising two piezoelectric braille cells and two buttons (Lenay et al., 2003). More precisely, each braille cell is composed of two columns of four pins (making a total of 16 pins). The size of the tool is close to that of a smartphone. During our experiments, we tested and used two models of the Tactos box (fig 1.) : the “Module d’Interaction Tactile” (MIT4 and MIT5).
The only differences between these two versions are the size of the box (slightly smaller for the MIT5) and the way it connects to a computer: the MIT4 can only be connected by Bluetooth, while the MIT5 can also be connected by USB cable. The system is composed of one effector which gives the user control over the position of a receptor field. The pins of the braille cells rise or lower as the receptor field (the mouse cursor) passes over pixel colours, according to the defined configuration (fig 2).
This system thus makes it possible to follow and explore a shape on the screen. We noticed that the thinner the row of raised pins, the better the shape is understood: a single row of pins is easier to follow and understand tactilely than many raised pins. In previous studies, subjects were able to recognize both simple and complex shapes with the Tactos system through active exploration of shapes along perceptual trajectories in the space of the screen (Lenay et al., 2003 ; Gapenne et al., 2003).
The haptic perception offered by the MIT5 is reinforced by speech synthesis on the computer. This speech synthesis can not only pronounce sentences associated with a colour but also play sounds. By combining the haptic and auditory feedback of a map, we reinforced the sensation of crossings, crossroads and intersections in our maps. This caught the attention of the user, who then took the time to explore the place thoroughly.
2.1.2 - “Tactos_GroundTask”
Tactos is a system based on the recognition of the pixels under the mouse cursor. We used the “Tactos_GroundTask” module, whose principle is to convert colours into raised pins on the braille cells of the MIT5.
Since the MIT5 has 16 pins, we decided to address 16 pixels at a time with the mouse cursor, so that each pin reacts to one pixel. The mouse cursor is thus transformed into a matrix of 16 receptor fields (fig 3).
The user browses the screen with the cursor (either with the mouse or with his finger on the touchpad) using his right index finger (if he is right-handed), while his left index finger rests on the braille cells of the MIT5. As the user browses the computer screen, the system transforms the pixels under the receptor field into tactile stimulation on the braille cells of the MIT5. With this configuration, the user can feel a very small, local part of a shape with his left index finger. There is only one point of action, a single matrix of 16 receptor fields with which to explore the computer screen. This may limit the understanding of the spatial arrangement of shapes and objects. Would it be possible for the user to explore a map fully and efficiently with only one receptor available ? It has been shown that even a small sensitive surface, coupled with active exploration, is enough to explore the screen, recognize shapes and infer information (Lenay et al., 2003 ; Summers & Chanter, 2002 ; Allerkamp et al., 2007). To understand the complete shape, the user must move his receptor field across the screen.
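To make this principle concrete, the following minimal sketch (in Python) shows how the 16 receptor fields could be turned into pin states. The 4 x 4 arrangement of the fields, the screen.get_pixel call and the tactile colour set are illustrative assumptions and do not correspond to the actual “Tactos_GroundTask” code.

```python
TACTILE_COLOURS = {"093FF7"}  # colours configured to raise pins (hexadecimal, as in Tactos_Config)

def pixel_to_hex(rgb):
    """Convert an (R, G, B) tuple into the hexadecimal notation used in the configuration."""
    return "{:02X}{:02X}{:02X}".format(*rgb)

def pins_state(screen, cursor_x, cursor_y, size=4):
    """Return a size x size boolean matrix: True where the corresponding pin should rise."""
    return [[pixel_to_hex(screen.get_pixel(cursor_x + dx, cursor_y + dy)) in TACTILE_COLOURS
             for dx in range(size)]
            for dy in range(size)]
```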
2.1.3 - “Tactos_Config” software
In order for the MIT5 to react to the pixel colours under the mouse cursor, we developed the “Tactos_Config” software (fig 4). In this software, we associated a hexadecimal colour with a configuration. For example, we selected the colour “093FF7” and decided whether or not to raise the braille cells, what the speech synthesis should say and how the audio information should be delivered (immediately when hovering over the pixel, by clicking the mouse, by pressing a Tactos box button, etc…). Finally, we could select a sound effect or write a name or a sentence in the corresponding fields.
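As an illustration of the kind of association “Tactos_Config” stores for each colour, the sketch below uses a small Python data structure. The field names, the trigger values and the example sentence are assumptions chosen for the illustration; they are not the software’s actual format.

```python
from dataclasses import dataclass

@dataclass
class ColourConfig:
    raise_pins: bool          # raise the braille cells over this colour?
    speech: str = ""          # sentence spoken by the speech synthesis
    sound: str = ""           # optional sound effect
    trigger: str = "hover"    # "hover" (immediate), "mouse_click" or "mit5_button"

# Example entry for the colour mentioned above; the spoken sentence is invented for the example.
CONFIG = {
    "093FF7": ColourConfig(raise_pins=True, speech="shape to follow", trigger="mit5_button"),
}
```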
2.2 - Approach and settings
Since the evaluation of a momentary user experience is in most cases not very reliable for predicting user experience in real life, we opted for a longitudinal study. We conducted ten sessions with two users over the span of several months. In each session, we tested new elements while correcting the problems detected during the previous one. Session after session, users developed an ease with and a better understanding of the system, which helped us improve Tactos from their feedback.
The primary interest and benefit of this approach is that it allows us to understand the user experience and the users’ relationship with the system, both of which evolve over time, from early learning to integration into everyday life.
Moreover, given that the user experience is highly dependent on the user’s internal state (e.g., predispositions, expectations, needs, motivations, mood, etc…), the system’s characteristics (e.g., complexity, purpose, usability, functionality, etc…) and the context of use (environment) (Hassenzahl & Tractinsky, 2006), field studies provide a much more realistic context within which to obtain reliable user experience data (Vermeeren et al., 2010). User experience therefore results from the interaction of a set of factors. That is why we asked one of the users to host sessions at her home, to observe gestures, thoughts and the use of Tactos in a real-life situation.
One of the main objectives of the study is to make the user autonomous and independent when using a computer. Visually impaired people must be able to use Tactos without the help of another person, especially when connecting and starting the system. Indeed, these people deplore the fact that many products adapted for them require the assistance of another person.
2.2.1 - Design steps
Our user-centered approach consisted of several steps. We started by creating content (e.g., exploration maps). We then configured Tactos to match this content, associating the colours used in the maps with the colours configured in the “Tactos_Config” software. When everything was set up for the Tactos system (MIT5, “Tactos_Config” software and “Tactos_GroundTask”), we proposed the content to our users, who tested and commented on both the Tactos system and the content.
After noting their remarks, we analysed them and defined new design guidelines to be sent back to the development team. As a result, the problems encountered by the users with the Tactos system (e.g., a lack of precision) could be corrected. In parallel, the exploration maps could also be reworked (e.g., by proposing finer lines for a better haptic perception of the path to follow). Each experiment therefore brought a double benefit. After these changes and additions, we repeated the experimental sessions with the users to observe new gestures, behaviours and impressions, and we repeated this cycle over and over.
In the creation and development of tools (e.g., technological, digital, mechanical, etc…), different methods are implemented to refine a system and make it as ergonomic and accepted as possible (Boy, 2017 ; Eason, 1995). This collaborative work between the development team and the end users makes it possible to focus on elements that would not have been identified or detected if the engineers had created the tool on their own (Sanders & Stappers, 2008). Among the methods used to identify these relevant points, researchers use questionnaires, interviews, observations, analysis of words and gestures, etc… Computer engineers, researchers in cognitive science and visually impaired or blind users therefore collaborated in this design approach in order to integrate ergonomics and human factors.
2.2.2 - The UAP Project Team
In the research team, we had a researcher in cognitive sciences and philosophy, two IT engineers and a cognitive ergonomist.
The researcher in cognitive sciences and philosophy was the leader of the project. He organized, directed and supervised the project team by giving the axis and direction of the study.
Of the two IT engineers, one has developed and coded the software and participated in the improvement of the Tactos system since its creation. Over the years, he and the researcher in cognitive sciences and philosophy have conducted several studies on the Tactos system (Lenay et al., 2003 ; Ziat et al., 2007 ; Gapenne et al., 2003 ; Tixier et al., 2013).
The second IT engineer specialized in development on Linux. He developed and improved the Tactos software and modules on Linux (such as the “Tactos_Config” software).
A cognitive ergonomist created and submitted maps to the users. He interacted with the visually impaired volunteers. He then transmitted the feedback to the development team in order to enhance the Tactos system.
2.2.3 - Participants
For our sessions, we had the participation of two middle-aged women who were born blind. We worked with them over seven months; such a follow-up also made it possible to observe a learning process. In total, we accumulated 15 to 20 hours of exchanges, discussions and experiments with them. These women already knew the Tactos system because they had participated in previous studies years ago (Tixier et al., 2013). Nevertheless, the Tactos box has changed a lot since then (size, height, grip, etc… ; fig 5).
In the first version of the MIT, the device was larger and the grip was different: the case could be held upright, and it had one set of braille cells on each side so that two people could use the MIT at the same time. This idea was abandoned when the new cases were manufactured.
The recent versions (MIT4 and MIT5) can only be used by a single person, who places their hand flat with the index finger on the box.
Our volunteers were familiar with digital technologies without being experts. They owned devices such as computers, smartphones, talking watches, talking scales, etc… They moved easily and independently around the city, although they did not use the same travel aid: one used a cane to navigate and avoid obstacles, the other was assisted by a guide dog trained to direct her.
These differences have an impact on the information to be put on the map. Another remark about our two volunteers: they both know how to read braille. We will come back to this point later in this article.
2.2.4 - Location and procedure of the sessions
To carry out our sessions, we settled in an office of the UTC research centre and arranged a desk near the entrance to facilitate the volunteers’ movements. Alternatively, with the agreement of one volunteer, we conducted sessions at her home, allowing us to carry out experiments in a real situation. More precisely, this volunteer sat comfortably at a large table in her living room, an arrangement that we consider representative of the real place and situation of future Tactos use.
Each session lasted 2 to 3 hours on average. The researcher began by seating the person in front of the computer and giving them all the necessary equipment (fig 6 : MIT5, mouse or smartphone). He then explained the activity the user was going to carry out (learning with the tutorial, exploring a map, etc…). The researcher either informed them of the changes compared to the previous session (such as the map having been oriented in the travel direction) or let the user discover them, in order to note their reactions and comments to these changes, additions and improvements. Users’ comments were noted throughout.
We invited users to say aloud everything that came to mind while using the Tactos system. This feedback allowed us to adjust and review the functionalities, but also the way we created our maps. Beyond these thoughts expressed aloud, discussions between users and researchers completed the sessions, during which the relevance and efficiency of the functionalities were discussed.
To avoid omitting important information from the discussions, we recorded each session with a dictaphone application on a smartphone.
Following this, a transcription of the session was made by a researcher to identify the relevant elements during the discussions. Once the transcription was completed, a speech analysis was carried out to find these relevant points which were sent to the developers to prioritize the additions to be created and the problems to be corrected in the IT development.
We repeated the process to improve the device from session to session. Therefore, week after week, we reworked the content and the Tactos device to match the mental mechanisms of visually impaired people.
2.2.5 - Materials
Computers and smartphone
Throughout the experiments, we used two computers, one running Linux (HP ZBook 17) and one running Windows (Asus VivoBook 17). The first allowed us to test a relative mode, in which movement is controlled by displacements, while the second allowed an absolute mode, in which it is controlled by positions.
In the first case, the movement is relative to the current cursor position. In the second case, it is the absolute position on the control surface (touch screen, graphics tablet) which controls the position in space on the screen.
In other words, the relative mode uses a mouse, whereas the absolute mode uses a finger (or a stylus) directly on a sensitive surface.
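The difference between the two modes can be summarised by two update rules. The Python sketch below is only an illustration of this distinction; the screen and touch-surface resolutions are assumed values.

```python
SCREEN_W, SCREEN_H = 1920, 1080   # assumed computer screen resolution
TOUCH_W, TOUCH_H = 2960, 1440     # assumed touch surface resolution (e.g., a smartphone)

def relative_update(cursor, delta):
    """Relative mode (mouse): the new position is the previous one plus a displacement."""
    x, y = cursor
    dx, dy = delta
    return (min(max(x + dx, 0), SCREEN_W - 1),
            min(max(y + dy, 0), SCREEN_H - 1))

def absolute_update(touch_point):
    """Absolute mode (sensitive surface): each surface point maps to one screen point."""
    tx, ty = touch_point
    return (int(tx / TOUCH_W * SCREEN_W),
            int(ty / TOUCH_H * SCREEN_H))
```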
To use the absolute mode properly, we could not touch the computer screen (our computer did not have a touchscreen), nor use its touchpad, for technical reasons (such as the need for a one-to-one mapping between this surface and the screen surface) and because of the size ratio between the touchpad and the screen. We had to go through other means: we used a smartphone (Samsung Galaxy) to explore the computer screen in absolute mode.
To do this, we downloaded the free Tuiodroid application from the Play Store, which allowed us to send the positions of the fingers on the smartphone to the computer. We then developed in-house an application (composed of two software modules : fig 7.) which retrieves and processes these positions.
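For illustration, the processing carried out by this application can be reduced to mapping the normalized coordinates carried by TUIO messages (between 0 and 1) to absolute screen coordinates. The sketch below is a simplified view under that assumption, not the actual code of the two modules.

```python
SCREEN_W, SCREEN_H = 1920, 1080   # assumed computer screen resolution

def tuio_to_screen(norm_x, norm_y):
    """Map a normalized TUIO cursor position (smartphone held in landscape) to screen pixels."""
    return (int(norm_x * (SCREEN_W - 1)), int(norm_y * (SCREEN_H - 1)))

# Example: a finger at the top-right corner of the smartphone places the
# receptor field at the top-right corner of the computer screen.
print(tuio_to_screen(1.0, 0.0))   # -> (1919, 0)
```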
2.2.6 - Contents
Tutorials
For novice Tactos users, we developed a tutorial in which the user learns how to use the tool properly and effectively. Concepts such as learning to navigate, exploring shapes and using features were integrated into the different versions of the tutorial. Speech synthesis was triggered at each change of interface and announced which shape appeared in the middle of the screen (fig 8).
Two columns on the left and right sides of the screen allowed the user to switch interfaces (and therefore to change shapes). When the cursor hovered over these areas, the speech synthesis said “previous page” and “next page” respectively. To perform the action, the user just had to left-click with the mouse (or tap on the smartphone screen).
When the user was lost during screen exploration (especially in the relative mode), he could press the left button of the MIT5 to hear where the shape was located. For example, the speech synthesis could say “the shape is further to the left” or “the shape is below”. These localization clues depended on the colour of the screen area in which the user clicked. In this interface, only the shape was tactile; the other colours carried only audio information (immediate or on user action).
Maps
Although knowledge of a place does not seem necessary to explore a map, in order to check how well reality was transcribed into the virtual map, we began by offering blind people areas that they already knew.
The maps we submitted covered neighbourhood areas in the cities of Compiègne and Paris. The paths were relatively short (approximately 15-minute walks in real life). We exported a real map area as .PNG (then .SVG) from the OpenStreetMap website, then manually retouched it using free image editing software (Gimp for .PNG and Inkscape for .SVG). This allowed us to add elements such as the route to follow or contextual information (fig 9.).
To do this, we associated a hexadecimal colour (e.g., the post office was associated with the yellow colour “F7F309”) with a configuration in the “Tactos_Config” software. Theoretically, we thus had over 16 million possible associations of information with colours. When we associated a colour with a configuration, we could set whether the braille cells rose or not, enter words or a sentence for the speech synthesis (or choose a sound), and decide on the occurrence of the speech synthesis (immediate when the receptor field passes over this position, on mouse click, or on MIT5 button press).
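By way of example, a small portion of such a map configuration might look like the sketch below. Apart from the post office colour quoted above, the hexadecimal values, messages and occurrence choices are assumptions made for the illustration.

```python
MAP_CONFIG = {
    # colour:  (raise pins, spoken message,               occurrence)
    "000000": (True,  "",                                 "hover"),        # path to follow, tactile only
    "F7F309": (False, "post office",                      "mit5_button"),  # landmark, spoken on demand
    "FF0000": (False, "at the intersection, turn left",   "hover"),        # guidance, spoken immediately
}
```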
3 - Results
In the first version of the tutorial, we proposed too many shapes. As a consequence, our volunteers found it tedious and annoying; we clearly did not get the results we wanted. On the contrary, for a first use, we had hoped to offer a fun and interesting tutorial that would make the user want to explore. With the help of our visually impaired volunteers, we identified the limitations and issues of this first version of the tutorial.
Subsequently, we lightened the tutorial and grouped the shapes and functionalities together in the same interface. We also improved the interface with larger shapes, more indications of their location, and a red border all around the interface letting the user know when he was reaching the edge of the screen.
With the tutorial, we also studied the problem of locating the point of perception in the digital space. Here, differences between the relative and absolute modes were observed. In the relative mode, since visually impaired users moved around the screen with a mouse, they could not know where their cursor was.
Even though we added a border to warn users that they were approaching the edge of the screen, that was not enough for them to know at every moment where their cursor was. Only in the tutorial did we add different colours so that the user could find out where the shape was on the screen by pressing the left button of the MIT5.
In other words, in relative mode, the user had no clue to know where his cursor was when exploring a map. In absolute mode, by contrast, with the use of the smartphone, the user could know where his cursor was on the screen: when he put his finger at the top right of the smartphone screen (in landscape mode), the cursor was indeed at the top right of the computer screen.
One of the limitations of this method is the ratio between the screen size of the smartphone and that of the computer: a slight discrepancy was felt. Nevertheless, with this method, the user understood that it was necessary to put his finger in the centre of the smartphone to find the shape, which was centred on the screen. The same was true for map exploration, since the path was always centred. As we always invited users to start from the bottom centre and go up, we considered adding tactile markers (pellets) on the smartphone to help the user find the middle of each side.
When we had the opportunity to test our system with two other visually impaired people who were not born blind, we realized that there was a difference between people born blind and people who gradually lost their sight. This difference shows both in their gestures and in their understanding of space. We will come back to this point later.
When we created a map for the first time, we naively oriented it using cardinal points, assuming that visually impaired and blind people would find it equally easy to navigate and explore a map regardless of its orientation. However, in spatial coding, the most important factor is to be able to rely on reference information, because it allows one to keep track of and continuously update the positions of objects in space. The cognitive representation of space is given by the reciprocal relationships between entities in the environment. To create reference points during coding, it is relevant to note the location, the distance or the directions to take to continue on one’s way (Millar, 1975 ; 1976 ; 1979 ; 1981 ; 1985 ; 2000).
Therefore, when we created our maps, we emphasised information such as street names, street lengths, direction indications, places and landmarks (drugstores, post offices, restaurants, train stations, etc… ; see the coloured rectangles in figure 9). These reference points can be either egocentric (relative to the individual’s body/location) or allocentric (relative to external points/locations) (Barrett et al., 2001 ; Berthoz, 1991 ; Paillard, 1991 ; Pashler, 1990).
Moreover, it is easier to locate an object or a target if your body is always in the same position and orientation relative to it, especially if you are blind. This advantage is lost if you move your body or if someone else moves the object on the map. To simplify understanding, we subsequently considered it preferable for visually impaired people to have the map laid out in their travel direction, with the goal at the top of the screen.
We tested the possibility of making the places tactile (e.g., a church in light blue in figure 9), but as these places were very close to the path to follow, they disturbed the user, who confused the two. From this observation, we decided to make only the paths to follow tactile; the other colours on the maps were used for contextual information delivered by speech synthesis (depending on the configured occurrence).
In any case, to make haptic perception easier, we noticed that the finer the lines on a map, the easier they were to follow. We started by testing different line widths on the map, for example representing avenues with wider lines and alleys with thinner ones. It was a good idea in theory, but perception and understanding on the MIT5 were not improved and, as our volunteers said, whether a street is narrow or wide is not important information for them. Based on this observation, we decided that all the street lines had to be as thin as possible (we drew them with a width of 1 pixel).
Another relevant piece of feedback from our volunteers was that the maps and paths we proposed corresponded more to a trip made by car than to a route followed on foot: the path was drawn along streets rather than along sidewalks. Understanding was affected, mainly at crossroads and pedestrian crossings. Unfortunately, during our sessions, we created maps and areas that did not always share the same scale, and it was difficult for us to reproduce the same scale accurately. We aim to correct and limit this problem.
Through our sessions, we clearly noted that the speech synthesis made it possible to accentuate information and, above all, to help the user focus on certain positions. Tactos’ speech synthesis could be configured to be heard either immediately or on user action. We chose between these two occurrences according to the information provided and whether it was strictly necessary for understanding the exploration. Information such as street names is only useful if the user wants it; it is not needed to explore, to move on the map or to follow a path. That is why we set the left button of the MIT5 as the switch activating this speech (at the user’s will).
For immediate information, in addition to saying words or sentences (e.g., “at the intersection, turn left”), Tactos’ speech synthesis was also able to emit sounds. Initially, we used only one sound to mark all the intersections, but we realized that the user then had no clue to distinguish the different forms of intersection. We wanted to reinforce the haptic side of the MIT5 with sounds. From this idea, we classified the intersections into three categories according to their ease of recognition and perception : simple intersections (e.g., “+” or “T” shapes), complicated intersections (e.g., “Y” or “X” shapes) and, finally, particular intersections (roundabouts). Through the tutorial, users learned to associate these three sounds with the different intersection configurations. In general, simple intersections are, as their name suggests, easy to recognize by touch, but there is great interest in reinforcing haptic perception for complicated intersections because, in these places, many of the MIT5 pins are raised, making them harder to understand.
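The two occurrence modes can be pictured as two small event handlers reading the same colour configuration. The sketch below is purely illustrative: the colour values and the speak() callback are hypothetical, and it does not reproduce the actual Tactos modules.

```python
MAP_CONFIG = {
    "F7F309": ("post office", "mit5_button"),               # landmark: spoken on demand
    "FF0000": ("at the intersection, turn left", "hover"),  # guidance: spoken immediately
}

def on_receptor_moved(pixel_hex, speak):
    """Immediate occurrence: triggered as soon as the receptor field crosses the colour."""
    message, trigger = MAP_CONFIG.get(pixel_hex, ("", None))
    if trigger == "hover":
        speak(message)

def on_mit5_left_button(pixel_hex, speak):
    """On-demand occurrence: triggered only when the user presses the MIT5 left button."""
    message, trigger = MAP_CONFIG.get(pixel_hex, ("", None))
    if trigger == "mit5_button":
        speak(message)
```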
4 - Discussion
One of the limitations of our approach is the fact that the Tactos system has only one action point. When visually impaired people touch an object or scroll through a screen, they use multiple fingers and even both hands, and using multiple fingers or both hands provides better perception than using just one finger (Morash et al., 2013). There is a debate over how many fingers or hands are needed to explore a map by touch. Some researchers believe that using multiple fingers or hands helps and improves haptic map perception (Klatzky et al., 1993 ; Lappin & Foulke, 1973 ; Millar & Al-Attar, 2004), while others consider that a single finger is sufficient for haptic exploration (Jansson & Monaci, 2003 ; Loomis et al., 1991 ; Overvliet et al., 2007). The Tactile Surface Area hypothesis suggests that we get more tactile information when we increase the tactile surface (by adding more fingers). Moreover, each additional finger contributes less information than the previous one (that is, adding the middle or ring finger has less impact than adding the index finger, and so on) (Jansson & Monaci, 2004). However, no difference in performance is found for blindfolded sighted people using one or two fingers when they have to name drawings of common objects (Loomis et al., 1991), name the borders of European countries (Jansson & Monaci, 2003) or find a target hidden in an array of symbols (Overvliet et al., 2007). Having only one point of action (in other words, only one receptor field on the screen) should therefore not cause any problems, since a finger has enough sensitive fields to feel vibrations and thus infer information (Lenay et al., 2003 ; Summers et al., 2002). The user simply needs to take more time to explore in order to gather the information he would otherwise have obtained through additional receptor fields.
Another point about our approach is the fact that tactile exploration is sequential: during tactile exploration, the subject has to keep in memory all the elements encountered in order to associate them and infer information (Révész, 1950). The more information a person must remember, the more his working memory is solicited and thus overloaded. The visuospatial sketchpad metaphor has been proposed to describe the immediate recording of spatial information in working memory (Baddeley, 1990 ; 2000 ; Baddeley & Hitch, 1974).
In a way, the elements and information on a map are related to each other. When the user follows his path, he encounters the various elements one after the other and has to memorize them. For example, he will pass a bus stop, then a bakery, before arriving at a roundabout. This spatial arrangement follows a sequential order. If the user does not follow this order and decides to explore randomly, he will have difficulty understanding the links between all these elements on the map.
We should also point out that there is a difference between people born blind and people who lose or have gradually lost their sight. We found that people who were born blind and those who lost their sight later showed differences in navigation but also in the use of computers. These two populations differ not only in their way of thinking but also in their way of navigating, and cannot be considered a single population. Therefore, Tactos must not simply respond to the needs of people with a visual impairment in general but must also take into account the onset and degree of their blindness. The tool must be adapted for these two types of users, hence the difficulty of offering navigation maps.
This difference can be explained by the lack of visual experience of space (Heller et al., 1996). When a person loses his sight and becomes blind, a kind of intermodal compensation is put in place to continue processing spatial information (Fortin et al., 2006 ; Pascual-Leone et al., 2005). Another difference is that people who are born blind have not been able to experience the crossing of different sensory modalities, which begins in the first months of life. Their sense of touch was thus not reinforced by vision, which makes spatial concepts more difficult to apprehend (Hatwell, 2003). However, when subjects were allowed sufficient time to tactilely explore an environment, the difference in understanding between people born blind and those who went blind later in life was no longer observed (Röder & Rösler, 1998). Exploration duration is therefore a factor to be considered.
The two volunteers we followed could read braille. When they explored a shape, a combination of pins was raised or lowered and, as a result, they sometimes read braille letters involuntarily. It is interesting to note that, in this case, it takes a little time to disregard this information. Unfortunately, this factor is specific to each person and we have no control over it. Users have to concentrate on not reading braille letters, which can create cognitive overload (Wickens, 2008 ; Young et al., 2015). According to our two volunteers, fewer and fewer blind people can read braille, since there are now screen readers and applications that read the screen for them. Nevertheless, we should keep in mind that for some visually impaired people this knowledge exists and may require additional cognitive effort.
5 - Conclusion
Despite our small sample of two volunteers, we succeeded in offering a user-centered approach. Nielsen suggested that with just five users, over 80% of usability problems can be detected (Nielsen, 2000); in return, researchers must carry out more interviews and tests. Since we had fewer users than expected, we called on them more often to compensate. The interest and determination of our two participants were decisive in understanding the flaws of the Tactos system. The user-centered approach made it possible to prioritize the corrections to be made as well as the main areas of improvement. All the content created was adapted and improved thanks to user feedback. In this way, the Tactos development team was able to define rules for creating content that can be uploaded directly to the web.
Soon, a vector map will provide information relating to specific locations on the map (such as the remaining distance in a street). Apart from this future addition, we also aim to test graphs and other maps.
Over the weeks and sessions, the difference between the absolute and relative modes of controlling the position of the receptor field in the digital space became an important question. Would one of these two modes be better suited to a given situation or type of user ? Should one of the navigation modes be avoided for a given population (relative versus absolute mode) ? What about the difficulty, for a totally blind person, of locating oneself both on the computer screen and in the real world ?
A comparative study could be done to understand the interests and relevance of each system, especially for a type of population (born blind, late blind, etc…).
Also, in a future study, we plan to use a computer with a touch screen to avoid the discrepancies due to the ratio between the screen size of the smartphone and that of the computer. The absolute mode would then also be improved, since the user would be able to explore a larger area of the map.
In addition, our study questions the importance of sensory input at every moment. Elements such as the size of the sensitive surface, the amount of information delivered and the presence of one or several action points with different mobility should be studied more precisely.
We can imagine that in a future study, we could dissociate the matrix of 16 receptor fields into two matrices of 8 independent fields. With this configuration, would it be possible to use two fingers to browse the computer screen (with the smartphone or not) and search for information ? However, it should be noted that these two matrices would always correspond to a single finger (index finger) placed on the braille cells of the Tactos box. What about the comprehension and clarity of information ? Could we consider this configuration even if two flows of tactile stimuli, coming from two different active fingers, would be treated by a single receptive finger ?
Acknowledgements
The UAP project team (Université de Technologie de Compiègne) is grateful and warmly thanks the volunteers who took part in the project as well as all the people who were available, solicited and who advised us through the study (among them, Jean-Philippe Mengual and Corentin Voiseux from Hypra Enterprise). We also thank the Banque Publique d’Investissement (BpiFrance) for the financing of the study. We hope that these bonds can be kept and that our collaboration can continue in the future to propose new Tactos system contents.