Abstract

Inequality in the accessibility of IT tools prevents blind people from accessing all the available content and information. Digital inclusion for all people with disabilities has been studied for several years, and tools have been developed to fill this gap. In the context of the UAP Project (Universal Access Platform), we developed a new version of a tactile sensory supplementation system called Tactos to allow the exploration of a computer screen by visually impaired and blind people. A user-centered approach was used through the participation and collaboration of a research laboratory, a digital tools company specializing in visual impairment, and end users, namely visually impaired and blind people.
This article reports on the approach we took to develop and improve the Tactos system through the contents we created: tutorials and maps. Interviews were conducted to understand how visually impaired and blind people get around the city and use technologies such as computers and smartphones. This allowed us to understand their daily lives. Thanks to the interviews, we adapted the system to their needs and their way of thinking. In this way, users were enlisted in the design process.

Keywords: Perceptual supplementation device, haptic perception, tactile maps.

Author(s)

Romain Roccamatisi is an ergonomist and cognitive psychologist. He specializes in the ergonomic evaluation of digital and/or technological tools and in the assessment of users' cognitive functions during tool use (user-centered design).

Charles Lenay is Professor of cognitive science and philosophy of science and former director of COSTECH. Accredited to supervise research in Philosophy (section 17) and History of Science (section 72), he devotes most of his research to cognitive technologies: how tools take part in cognitive activity (reasoning, memorization, perception, interaction, etc.).

Dominique Aubert is a research engineer specializing in software development for cognitive science research and interaction design.

Tobias Ollive is a research engineer in computer science. He is in charge of the technical development of the Tactos project.

1 - Introduction

With the advancement of technology and the development of new tools, the possibility for people with visual impairments to know what is on a computer or smartphone screen seems increasingly within reach (Shao et al., 2017). The accessibility of these devices now allows them to use screens independently, without asking for help from another person. In this way, blind people can access the digital environment both for personal use and in a professional context. On a computer, they only use the keyboard keys to move from field to field on the screen; it is therefore rather rare for a blind person to use the mouse. To find out what is written on the screen, they can use screen readers such as JAWS or NVDA (NonVisual Desktop Access), which read the text aloud through text-to-speech. These screen readers have improved considerably in recent years, but unfortunately they cannot always describe the images and shapes that appear on the screen. For this reason, research has focused on how to provide tactile and physical information to visually impaired and blind people through sensory supplementation devices (Lenay et al., 2003; Meijer, 1992; Vidal-Verdú & Hafez, 2007; Crossan & Brewster, 2008). Blind people have mental images that share common properties with those of sighted people, but, for blind people, this knowledge must be constructed through relations between movements and tactile, auditory and olfactory stimuli (Cornoldi & Vecchi, 2000; Hatwell, 2003). In a way, the visual information appearing on the screen could be replaced by tactile and auditory stimuli.

Based on previous work and theories, this article discusses how we implemented a user-centered design approach. The UAP Project (Universal Access Platform) aims at creating a tactile sensory supplementation system called Tactos to allow the exploration of a computer screen by visually impaired and blind people (Lenay et al., 2003; Sribunruangrit et al., 2002; Ziat et al., 2007). Tactos makes the pixel colours of the computer screen perceptible through touch, using braille cells on a small device. The braille cells react to the colours by moving their pins up or down. In other words, Tactos gives blind people tactile access to visual information they could not otherwise obtain. This article presents the continuation of the project with a new version of the tactile sensory supplementation device.

Since the end users of the Tactos system are visually impaired, they do not have the same mental representations for moving around as sighted people. It is not appropriate to give them the same movement instructions as a sighted person. That is why it was important to take into account their ways of thinking and moving, and we established an approach to better adapt the tool. From users' feedback, the development team was able to define the main design and correction axes. Thanks to the users' proposals, ergonomic problems could be identified, as could recommendations. The relevant elements found were discussed with users and then forwarded to the development team to improve the system.
After introducing the material, we will present the two types of content we created to perfect the design of Tactos: a tutorial and exploration maps.

2 - Theoretical background

The sense of sight seems essential for using a computer and navigating on the screen. However, vision does not seem to be essential for understanding spatial concepts (Aleman et al., 2001; Kaski, 2002; Thinus-Blanc & Gaunet, 1997; Vanlierde & Wanet-Defalque, 2004; Tinti et al., 2006). Blind people, like sighted people, can create, imagine and manipulate spatial representations and images (Kerr, 1983). If these two populations have the same imaging capacity, then blind people should be able to understand spatial information on a computer or smartphone screen. The lack of this sense and of visual experience can be compensated in part by haptic perception (Heller, 2000). Blind people habitually touch objects with both hands and multiple fingers to replace visual information gathering. For example, by touching a glass on a table, they understand what the object is, whereas a sighted person would only have to look at it. For spatial encoding, spatial organization differs radically between haptics and vision: haptic space is governed by the individual's body, while vision depends on the spatial coordinates of external elements (Hatwell, 1960; Révész, 1950; Warren, 1977). With the development of technological tools and computer systems, this essential visual perception can be replaced by other techniques (Toennies et al., 2011). Tactos is itself a perceptual supplementation device which aims to compensate for the lack of visual information.

Exploring a relief-printed map (e.g., on thermoformed paper) of an unfamiliar environment can provide a blind person with equivalent or even better information about the spatial arrangement of an environment or place than directly exploring the environment in real life (Bentzen, 1972; Blades et al., 1999; Espinosa et al., 1998; Ungar et al., 2000). It is therefore not necessary to know a place to explore a map.
The use of thermoformed paper for a map still has its limits: a small or very local exploration area, printing costs, etc. These limits can be mitigated with the Tactos system. By switching from a paper format to a digital one, a greater diversity of areas and maps to explore becomes available. Moreover, economically, this spares the user from printing a new map each time they want to explore a place. Note that on a paper map, the person can use multiple fingers at the same time to explore. By contrast, as we will see, with Tactos there is only one point of action to access the information.

It has also been noted that the effectiveness of using tactile maps depends on the user's hand movement strategies (Blades et al., 1999; Berla & Butterfield, 1977). Inter-individual differences can be seen in these hand movements. Whatever the tool, genuine training in navigation or screen exploration should help reduce this gap between users. The creation of a tutorial therefore seems necessary so that all users are on an equal footing in the task of exploring the screen.
Haptic tracking strategies (using multiple fingers, both hands, or a point of reference) may not be instinctive. Accordingly, these effective haptic strategies may be learned either after a substantial period of experimentation or through explicit training. A first introduction through a tutorial would give all users the same knowledge and awareness of the tool's action. Like sighted people, blind people may well have difficulties with their haptic tracking strategies. In one study, researchers asked blind adolescents to trace a line while placing the index finger of their non-dominant hand as a landmark (Berla & Butterfield, 1977). Some blind adolescents were not as good as others at tracking and identifying a country's borders on a map (Berlá et al., 1976): they stopped either too early or too late.
Placing this index finger as a reference allowed them to know when the entire shape had been explored. The landmark also told them that if they continued exploring and returned to this point, they were following an already explored shape. As a result, trained blind adolescents performed better than those who had not been trained.

2.1 - The Tactos system

2.1.1 - The MIT5

The Tactos box is a small rectangular device comprising two piezoelectric braille cells and two buttons (Lenay et al., 2003). More precisely, each braille cell is composed of two columns of four pins (making a total of 16 pins). The size of the tool is close to that of a smartphone. During our experiments, we tested and used two models of the Tactos box (fig. 1): the "Module d'Interaction Tactile" (MIT4 and MIT5).

Figure 1: Modules d'Interaction Tactile (MIT4 above and MIT5 below)

The only differences between these two versions are the size of the box (slightly smaller for the MIT5) and the way it connects to a computer: the MIT4 can only be connected by Bluetooth, while the MIT5 can also be connected by USB cable. The system is composed of one effector which gives users control over the position of a receptor field. The pins of the braille cells rise or lower as the receptor field (the mouse cursor) moves over a pixel colour, according to the defined configuration (fig. 2).

Figure 2: A pin configuration on the MIT5 according to the finger's position on the smartphone screen

This system makes it possible to follow and explore a shape on the screen. We noticed that the thinner the row of raised pins, the better the shape is understood. A single row of pins is thus easier to interpret tactilely and to follow than many raised pins. In previous studies, subjects were able to recognize simple as well as complex shapes with the Tactos system through active exploration of shapes, that is, perceptual trajectories across the space of the screen (Lenay et al., 2003; Gapenne et al., 2003).

The haptic perception offered by the MIT5 is reinforced by speech synthesis on the computer. Not only can this speech synthesis pronounce sentences associated with a colour, it can also emit sounds. By combining the haptic and auditory feedback of a map, we reinforced the sensation of crossings, crossroads and intersections in our maps. This caught the user's attention, and they then took the time to explore the place thoroughly.

2.1.2 - “Tactos_GroundTask”

Tactos is a system based on the recognition of the pixels under the mouse cursor. We used a "Tactos_GroundTask" module whose principle is to convert colours into the raising of braille pins on the MIT5.
Since the MIT5 has 16 pins, we decided to address 16 pixels at a time with the mouse cursor, so that each pin reacts to one pixel. The mouse cursor is thus transformed into a matrix of 16 receptor fields (fig. 3).

Figure 3: The matrix of 16 receptor fields for the cursor.

The user browses the screen with the cursor (either with the mouse or with a finger on the touchpad) using the right hand/index finger (if right-handed), while the left index finger is positioned on the braille cells of the MIT5. As the user browses the computer screen, the system transforms the pixels under the receptor field into tactile stimulation on the braille cells of the MIT5. With this configuration, the user is able to feel a very small, local part of a shape with the left index finger. There is only one point of action, only one matrix of 16 receptor fields, to explore a computer screen. This may be a limit to understanding the spatial arrangement of shapes and objects. Would it be possible for the user to explore a map fully and efficiently if only one receptor were available? It has been shown that even with a small sensitive surface, coupled with active exploration, it is possible to explore the screen to recognize shapes and infer information (Lenay et al., 2003; Summers & Chanter, 2002; Allerkamp et al., 2007). To understand the complete shape, the user must move the receptor field across the screen.
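As an illustration of this conversion principle, here is a minimal sketch in Python; the colour value, function name and block handling are our own assumptions, not the actual Tactos code. The 4 x 4 block of pixels under the cursor is compared with the set of colours configured as tactile, and each matching pixel raises the corresponding pin.

# Minimal sketch of the pixel-to-pin conversion principle (illustrative names and colours).
TACTILE_COLOURS = {"093FF7"}  # colours configured to raise pins

def pixels_to_pins(pixel_block):
    """pixel_block: 4 x 4 list of hex colour strings under the cursor.
    Returns a 4 x 4 matrix of booleans: True = raise the pin, False = lower it."""
    return [[colour in TACTILE_COLOURS for colour in row] for row in pixel_block]

# Example: a vertical line of the tactile colour crossing the receptor field
# raises the second column of pins and lowers all the others.
block = [["FFFFFF", "093FF7", "FFFFFF", "FFFFFF"] for _ in range(4)]
pins = pixels_to_pins(block)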

2.1.3 - “Tactos_Config” software

In order for the MIT5 to react to the pixel colours under the mouse cursor, we developed the "Tactos_Config" software (fig. 4). In this software, we associated a hexadecimal colour with a configuration. For example, we selected the colour "093FF7" and decided whether or not to raise the braille pins, what the speech synthesis should say, and how the sound information should be triggered (immediately when hovering over the pixel, by mouse click, by pressing the Tactos box button, etc.). Finally, we could select a sound effect or enter a name or a sentence in the corresponding fields.

Figure 4: Tactos_Config software
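As an illustration, such a colour configuration can be thought of as a small record. The following is a minimal sketch in Python; the field names and the second example value are our own assumptions, not the actual Tactos_Config format.

from dataclasses import dataclass

@dataclass
class ColourConfig:
    colour: str        # hexadecimal pixel colour this configuration applies to
    raise_pins: bool   # raise the braille pins when the receptor field crosses it
    speech: str        # text spoken by the speech synthesis (empty if none)
    sound: str         # optional sound effect played instead of speech
    trigger: str       # "hover", "mouse_click" or "mit5_button"

# Example: a tactile, silent colour for a shape, and a non-tactile colour whose
# name is only spoken when the MIT5 button is pressed (illustrative values).
shape_colour = ColourConfig("093FF7", raise_pins=True, speech="", sound="", trigger="hover")
label_colour = ColourConfig("00AA55", raise_pins=False, speech="Rue de Paris", sound="", trigger="mit5_button")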

2.2 - Approach and settings

Since the evaluation of a momentary user experience is in most cases not very reliable for predicting user experience in real life, we opted for a longitudinal study. We conducted ten sessions with two users over the span of several months. In each session, we tested new elements while correcting the problems detected during the previous one. Over the sessions, users thus developed an ease with, and a better understanding of, the system, which helped us improve Tactos from their feedback.
The primary benefit of this approach is that it allows us to understand the user experience and the users' relationship with the system, both of which evolve over time from early learning to integration into everyday life.

Moreover, given that the user experience is highly dependent on the user's internal state of mind (e.g., predisposition, expectations, needs, motivations, mood, etc.), the system's characteristics (e.g., complexity, purpose, usability, functionality, etc.) and the context of use (environment) (Hassenzahl & Tractinsky, 2006), field studies provide a much more realistic context within which to obtain reliable user experience data (Vermeeren et al., 2010). User experience therefore results from the interaction of a set of factors. That is why we asked one of the users to hold sessions at her home, in order to identify gestures, thoughts and the use of Tactos in a real-life situation.
One of the main objectives of the study is to make the user autonomous and independent when using a computer. Visually impaired people must be able to use Tactos without the help of another person, especially when connecting and starting the system. Indeed, these users deplore the fact that many products adapted for them require the assistance of another person.

2.2.1 - Design steps

Our user-centered approach consisted of several steps. First, we created content (e.g., exploration maps). We then configured Tactos to match this content, matching the colours used in the maps with the colours configured in the "Tactos_Config" software. When everything was set up for the Tactos system (MIT5, Tactos_Config software and Tactos_GroundTask), we proposed the content to our users, who tested and commented on both the Tactos system and the content.

After noting the users' remarks, we analysed them and defined the new design guidelines to be sent back to the development team. As a result, the problems encountered by the users with the Tactos system (e.g., a lack of precision) could be corrected. In parallel, the exploration maps could also be reworked (e.g., by proposing finer lines for a better haptic perception of the path to follow). The benefit of each experiment is therefore double. After these changes and additions, we repeated the experimental sessions with users to observe new gestures, behaviours and impressions, and we repeated this cycle over and over again.
In the creation and development of tools (e.g., technological, digital, mechanical, etc.), different methods are implemented to perfect the system and make it as ergonomic and accepted as possible (Boy, 2017; Eason, 1995). This collaborative work between the development team and the end users makes it possible to focus on elements that would not have been identified or detected if the engineers had created the tool on their own (Sanders & Stappers, 2008). Among the methods used to identify these relevant points, researchers use questionnaires, interviews, observations, analyses of words and gestures, etc. Computer engineers, researchers in cognitive science, and visually impaired or blind users therefore collaborated in this design approach in order to integrate ergonomics and human factors.

2.2.2 - The UAP Project Team

In the research team, we had a researcher in cognitive sciences and philosophy, two IT engineers and a cognitive ergonomist.
The researcher in cognitive science and philosophy was the leader of the project. He organized, directed and supervised the project team, setting the axes and direction of the study.
One of the two IT engineers has developed and coded the software and participated in the improvement of the Tactos system since its creation. Over the years, he and the researcher in cognitive science and philosophy have conducted several studies on the Tactos system (Lenay et al., 2003; Ziat et al., 2007; Gapenne et al., 2003; Tixier et al., 2013).
The second IT engineer specialized in development on Linux. He developed and improved the Tactos software and modules on Linux (such as the "Tactos_Config" software).
The cognitive ergonomist created the maps and submitted them to the users. He interacted with the visually impaired volunteers and then transmitted their feedback to the development team in order to enhance the Tactos system.

2.2.3 - Participants

For our sessions, we had the participation of two middle-aged women who were born blind. We worked with them for seven months; such a follow-up also made it possible to observe a learning process. In total, we accumulated 15 to 20 hours of exchanges, discussions and experiments with them. These women already knew the Tactos system, having participated in previous studies years earlier (Tixier et al., 2013). Nevertheless, the Tactos box has changed a lot since then (size, height, grip, etc.; fig. 5).
In the first version of the MIT, the device was larger and the grip was different: it was possible to hold the case upright. The box had one series of braille cells on each side, so that two people could use the MIT at the same time. This idea was abandoned when the new cases were manufactured.
The recent versions (MIT4 and MIT5) can only be used by a single person, who places their hand flat with the index finger on the box.

Figure 5: The different versions of the "Modules d'Interaction Tactile" (above: MIT3; below: MIT3, MIT4 and MIT5)

Our volunteers were familiar with digital technologies without being experts. They owned devices such as computers, smartphones, talking watches, talking scales, etc. They moved easily and independently around the city. However, our two volunteers did not use the same travel aid: one uses a cane to navigate and avoid obstacles, while the other is assisted by a guide dog who knows how to direct her.
These differences have an impact on the information to be put on a map. Another point about our two volunteers: they both know how to read braille. We will return to this point later in the article.

2.2.4 - Location and procedure of the sessions

To carry out our sessions, we settled in an office of the UTC research centre, arranging a desk near the entrance to facilitate the volunteers' movements. Alternatively, with the agreement of one volunteer, we conducted sessions at her home, allowing us to carry out experiments in a real situation. More precisely, our volunteer was comfortably settled in front of a large table in her living room. We can consider that this arrangement corresponds to the actual place and situation in which Tactos would later be used.
Each session lasted 2 to 3 hours on average. The researcher began by placing the person in front of the computer and giving them all the necessary equipment (fig. 6: MIT5, mouse or smartphone). He then explained the activity the user was going to perform (learn with the tutorial, explore a map, etc.). The researcher either informed them of the changes compared to the previous session (such as orienting the map in the travel direction) or let the user discover them themselves, in order to note their reactions and comments to these changes, additions and improvements. The user's comments were noted throughout.

Figure 6: The Tactos system equipment for the sessions (computer, MIT5 and smartphone)

We invited users to say out loud everything that came to mind while using the Tactos system. This feedback allowed us to adjust and review the functionalities as well as the way we created our maps. Beyond these thoughts expressed aloud, discussions between users and researchers completed the sessions, during which the relevance and efficiency of the functionalities were discussed.

To avoid omitting important information from the discussions, we took care to record each session with a dictaphone (using a smartphone).
A researcher then transcribed each session to identify the relevant elements of the discussions. Once the transcription was completed, a discourse analysis was carried out to find these relevant points, which were sent to the developers to prioritize the additions to be created and the problems to be corrected in the IT development.
We repeated the process to improve the device from session to session. Therefore, week after week, we reworked the content and the Tactos device to match the mental mechanisms of visually impaired people.

2.2.5 - Materials

Computers and smartphone

Throughout the experiments, we used two computers, one running Linux (HP ZBook 17) and one running Windows (Asus VivoBook 17). The first allowed us to test a relative mode, in which movement is controlled by displacements, while the second allowed an absolute mode, in which it is controlled by positions.
In the first case, the movement is relative to the current cursor position. In the second case, it is the absolute position on the control surface (touch screen, graphics tablet) which controls the position on the screen.
In other words, in relative mode we use a mouse, whereas in absolute mode we directly use a finger (or a stylus) on a sensitive surface.
To use the absolute mode properly, we could not touch the computer screen (our computers did not have touchscreens), nor could we use the touchpad, for technical reasons (such as the need for a bijection between this surface and the screen surface) and because of the size ratio between the touchpad and the screen. We therefore had to go through other means: we used a smartphone (Samsung Galaxy) to explore the computer screen in absolute mode.
To do this, we downloaded the free Tuiodroid application from the Play Store, which allowed us to send the positions of the fingers on the smartphone to the computer. We then developed in-house an application (composed of two software modules; fig. 7) to retrieve and process these positions.

Figure 7: The application which retrieves the user's finger positions on the smartphone/computer screen
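A minimal sketch of the absolute-mode mapping performed by this kind of application is given below in Python. It assumes, as the TUIO protocol does, that touch positions arrive normalized between 0 and 1; the screen resolution and the function name are our own illustrative choices.

# Sketch of the absolute-mode mapping: the normalized touch position received
# from the smartphone is scaled to the computer screen resolution in order to
# place the receptor field (resolution and names are illustrative).
SCREEN_W, SCREEN_H = 1920, 1080

def touch_to_cursor(nx: float, ny: float) -> tuple[int, int]:
    """nx, ny: normalized finger position on the smartphone held in landscape."""
    return int(nx * SCREEN_W), int(ny * SCREEN_H)

# A finger at the centre of the smartphone lands at the centre of the computer
# screen, which is where the tutorial shapes and map paths were centred.
touch_to_cursor(0.5, 0.5)  # -> (960, 540)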

2.2.6 - Contents

Tutorials

For novice Tactos users, we developed a tutorial in which the user learns how to use the tool properly and effectively. Concepts such as learning to navigate, exploring shapes and using features were integrated into the different versions of the tutorial. Speech synthesis was triggered at each change of interface and announced which shape appeared in the middle of the screen (fig. 8).

Figure 8: An interface from the tutorial with a shape in the centre of the screen ("+" symbol)

Two columns on the left and right sides of the screen allowed the user to switch interfaces (and therefore to change shapes). When the cursor hovered over these areas, the speech synthesis said, respectively, "previous page" and "next page". To perform the action, the user just had to left-click with the mouse (or tap on the smartphone screen).
When the user got lost during screen exploration (especially in relative mode), they could press the left button of the MIT5 to hear the shape's localization. For example, the speech synthesis could say "the shape is further to the left" or "the shape is below". These localization clues depended on the colours of the areas where the user clicked on the screen. In the interface, only the shape itself was tactile; the other colours only produced sounds (immediately or on user action).
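This localization mechanism can be summarized by a short sketch: each background colour zone of the tutorial interface is bound to a spoken hint, played when the MIT5 button is pressed over that zone. The colour values and the wording of the hints below are our own illustrations, not the actual tutorial configuration.

# Sketch of the localization-hint principle used in the tutorial (illustrative values).
HINTS = {
    "AA0000": "the shape is further to the left",
    "00AA00": "the shape is further to the right",
    "0000AA": "the shape is above",
    "AAAA00": "the shape is below",
}

def localization_hint(zone_colour: str) -> str:
    """Return the sentence spoken when the MIT5 button is pressed over a zone."""
    return HINTS.get(zone_colour, "the shape is here")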

Maps

Knowledge of a place does not seem necessary to explore a map. Nevertheless, to first check the quality of the transcription between the real and the virtual, we began by offering blind people areas that they already knew.
The maps we submitted were areas of neighbourhoods in the cities of Compiègne and Paris. The paths were relatively short (approximately 15-minute walks in real life). We exported a real map area as a .PNG (then .SVG) file from the OpenStreetMap website. We then manually retouched it using free image editing software (Gimp for .PNG and Inkscape for .SVG). This allowed us to add elements such as the route to follow or contextual information (fig. 9).

Figure 9: A map created for a session, oriented in the user's travel direction

To do this, we associated a hexadecimal colour (e.g., the Post office was associated with the yellow colour "F7F309") with a configuration in the Tactos_Config software. Theoretically, we therefore had over 16 million possible associations of information with colours. When associating a colour with a configuration, we could decide whether or not the braille pins rose, enter words or a sentence for the speech synthesis (or choose a sound), and set when the speech synthesis occurred (immediately when the receptor field passes over the position, on mouse click, or on MIT5 button click).
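In practice, a map legend therefore amounts to a small table binding each colour drawn on the map to a behaviour. The sketch below illustrates this; apart from the Post office yellow mentioned above, the colours, labels and field names are invented for illustration.

# Sketch of a map legend: only the path is tactile, the other colours give
# spoken context on demand (illustrative values except "F7F309" / Post office).
MAP_LEGEND = {
    "093FF7": {"raise_pins": True,  "speech": "",              "trigger": "hover"},        # path to follow
    "F7F309": {"raise_pins": False, "speech": "Post office",   "trigger": "mit5_button"},
    "2ECC71": {"raise_pins": False, "speech": "Train station", "trigger": "mit5_button"},
}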

3 - Results

In the first version of the tutorial, we proposed too many shapes. As a consequence, our volunteers found it tedious and annoying; we clearly did not get the results we wanted. On the contrary, for a first use we had hoped to offer a fun and interesting tutorial that makes the user want to explore. With the help of our visually impaired volunteers, we identified the limitations and issues of this first version of the tutorial.
Subsequently, we lightened the tutorial and grouped the shapes and functionalities together in the same interface. We also improved the interface with larger shapes, more indications of their location, and a red border all around the interface letting the user know when they were leaving the screen.

With the tutorial, we also studied the problem of locating the point of perception in digital space. Differences between relative and absolute modes were observed. In relative mode, since visually impaired users were using a mouse to move around the screen, they could not know where their cursor was.
Even though we added a border to warn users that they were approaching the edge of the screen, this was not enough for them to know at every moment where their cursor was. Only in the tutorial did we place different colours so that the user could find out where the shape was by pressing the left button of the MIT5.
In other words, in relative mode, the user had no clue about where their cursor was when exploring a map. In absolute mode, by contrast, with the use of the smartphone, the user could know where their cursor was on the screen: when they put their finger at the top right of the smartphone screen (in landscape mode), the cursor was indeed at the top right of the computer screen.
One of the limitations of this method is the ratio between the screen size of the smartphone and that of the computer; a slight discrepancy was felt. Nevertheless, with this method, the user understood that they had to place their finger at the centre of the smartphone to find the shape, which was centred on the screen. The same was true for map exploration, since the path was always centred. As we always invited users to start from the bottom centre and go up, we had thought of sticking small raised markers on the smartphone to help the user find the middle of each side of the smartphone.
When we were later able to test our system with two other visually impaired people who were not born blind, we realized that there is a difference between people who were born blind and people who gradually lost their sight. This difference is observed both in gestures and in a different understanding of space. We will come back to this point later.

When we created a map for the first time, we naively oriented it using cardinal points, assuming that visually impaired and blind people would find it equally easy to navigate and explore a map regardless of its orientation. However, in spatial coding, the most important factor is being able to rely on reference information, because it allows one to keep track of and continuously update the position of objects in space. The cognitive representation of space is given by the reciprocal relationships between entities in the environment. To create reference points during coding, it is relevant to note the locations, distances or directions to take to continue on one's way (Millar, 1975; 1976; 1979; 1981; 1985; 2000).
Therefore, when we created our maps, we emphasized information such as street names, street lengths, direction indications, places and landmarks (drugstores, post offices, restaurants, train stations, etc.; see the coloured rectangles in Figure 9). These reference points can be either egocentric (relative to the individual's body/location) or allocentric (relative to external points/locations) (Barrett et al., 2001; Berthoz, 1991; Paillard, 1991; Pashler, 1990).

Moreover, it is easier to locate an object or a target if your body is always in the same position and orientation relative to that object, especially if you are blind. This bearing is lost if you move your body or if someone else moves the object on the map. To simplify understanding, we subsequently considered it preferable for visually impaired people to have the map laid out in their travel direction, with the goal at the top of the screen.
We tested the possibility of making the places tactile (e.g., a church in light blue in figure 9), but as these were very close to the path to follow, this disturbed the user, who confused them. From this observation, we decided to make only the paths to follow tactile. The other colours on the maps were used for contextual information delivered by speech synthesis (depending on the configured trigger).

To make haptic perception easier, we noticed that the finer the lines on a map, the easier they were to follow. We started by testing different line widths on the map, for example representing avenues with wider lines and alleys with thinner ones. It was a good idea in theory, but perception and understanding on the MIT5 were not improved and, as our volunteers said, whether a street is narrow or wide is not important information for them. Based on this observation, we decided that all street lines had to be as thin as possible (we drew them with a width of 1 pixel).
Another relevant piece of feedback from our volunteers was that the maps and paths we proposed corresponded more to a trip by car than to a path to be followed on foot: the path was sometimes based on streets and sometimes on sidewalks. Understanding was affected, mainly at crossroads and pedestrian crossings. Unfortunately, during our sessions, we created maps and areas that did not always have the same scale, and it was difficult for us to reproduce the same scale consistently. We aim to correct and limit this problem.

Through our sessions, we clearly noted that the speech synthesis made it possible to accentuate information and, above all, to help the user focus on certain positions. Tactos' speech synthesis could be configured to be heard either immediately or on user action. We distinguished between these two trigger modes according to the information provided and whether it was absolutely necessary for understanding the exploration. Information such as street names is only useful if the user wants it; it is not necessary for exploring, moving on the map or following a path. That is why we decided that such speech would be triggered by the left click of the MIT5 (at the user's request).

For immediate information, in addition to saying words or sentences (e.g., "at the intersection, turn left"), Tactos' speech synthesis was also able to emit sounds. Initially, we used only one sound to mark all intersections, but we realized that the user then had no clue to distinguish the different forms of intersections. We wanted to reinforce the haptic side of the MIT5 with sounds. From this idea, we classified intersections into three categories according to their ease of recognition and perception: simple intersections (e.g., "+" cross shapes or "T" shapes), complicated intersections (e.g., "Y" or "X" shapes), and finally particular intersections (roundabouts). Through the tutorial, users learned to associate these three sounds with the different intersection configurations. In general, simple intersections are, as their name suggests, easy to recognize by touch, but there is great interest in reinforcing haptic perception for complicated intersections, because in these places many pins of the MIT5 are raised, making them harder to understand.
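As a sketch of this sound reinforcement, the three intersection categories described above can simply be bound to three distinct sounds; the sound file names below are our own illustration, not the actual assets used.

# Sketch of the intersection-to-sound mapping (illustrative file names).
INTERSECTION_SOUNDS = {
    "simple": "sound_simple.wav",            # "+" cross or "T" shapes
    "complicated": "sound_complicated.wav",  # "Y" or "X" shapes
    "particular": "sound_particular.wav",    # roundabouts
}

def sound_for(intersection_type: str) -> str:
    """Return the sound played when the receptor field reaches an intersection."""
    return INTERSECTION_SOUNDS[intersection_type]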

4 - Discussion

One of the limitations of our approach is the fact that the Tactos system only has one action point. When visually impaired people touch an object or scroll through a screen, they use multiple fingers and even both hands. Using multiple fingers or both hands provides better perception than using just one finger (Morash et al., 2013). There is a debate over how many fingers or hands are needed to explore a map by touch. Some researchers believe that using multiple fingers or hands helps and improves the understanding of haptic map perception (Klatzky et al., 1993; Lappin & Foulke, 1973; Millar & Al-Attar, 2004), while other researchers consider that using only one finger is sufficient for haptic exploration (Jansson & Monaci, 2003; Loomis et al., 1991; Overvliet et al., 2007). The Tactile Surface Area hypothesis proposes that we get more tactile information when we increase the tactile surface (by adding more fingers). Moreover, the benefit of each additional finger on the map decreases (that is, adding the middle finger and/or the ring finger brings less information than adding the index finger, and so on) (Jansson & Monaci, 2004). However, there is no difference in performance for blindfolded sighted people when using one or two fingers to name drawings of common objects (Loomis et al., 1991), name the borders of European countries (Jansson & Monaci, 2003), or find a target hidden in an array of symbols (Overvliet et al., 2007). If we have only one point of action (in other words, only one receptor field on the screen), this should not cause any problems, since there are enough sensitive fields on a finger to feel vibrations and thus infer information (Lenay et al., 2003; Summers & Chanter, 2002). To compensate, the user just needs to take more time exploring in order to obtain the information they would have had with additional receptor fields.

Another point in our approach is the fact that tactile exploration is done sequentially. In other words, during tactile exploration, the subject has to keep in memory all the elements encountered in order to associate them and infer information (Révész, 1950). The more information a person must remember, the more their working memory is taxed and thus overloaded. A sketchpad metaphor has been proposed to describe the immediate recording of spatial information in working memory (Baddeley, 1990; 2000; Baddeley & Hitch, 1974).
In a way, the elements and pieces of information on a map are related to each other. When users follow their path, they encounter the various elements one after the other and have to memorize them: for example, they will pass a bus stop, then a bakery, before arriving at a roundabout. This spatial arrangement follows a sequential order. If the user does not follow this order and decides to explore randomly, they will have difficulty understanding the link between all these elements on the map.

We should also point out that there is a difference between people born blind and people who have gradually lost their sight. We found that people who were born blind and those who lost their sight later showed differences in navigation but also in the use of computers. These two populations differ not only in their way of thinking but also in their way of navigating; they cannot be considered a single population. Therefore, Tactos must not simply respond to the needs of people with a visual impairment but must also take into account the degree of blindness. The tool must be adapted for these two types of users, hence the difficulty of offering navigation maps. This difference can be explained by the lack of visual experience of space (Heller et al., 1996). When a person loses their sight and becomes blind, some kind of intermodal compensation is put in place to continue processing spatial information (Fortin et al., 2006; Pascual-Leone et al., 2005). Another difference is that people who are born blind have not been able to experience the crossing of different sensory modalities, which begins in the first months of life. Thus, their sense of touch was not reinforced by vision, which makes spatial concepts more difficult to apprehend (Hatwell, 2003). However, when subjects were given sufficient time to tactilely explore an environment, the difference in understanding between people who were born blind and those who went blind later in life was no longer observed (Röder & Rösler, 1998). Exploration duration is therefore a factor to be considered.
The two volunteers we followed could read braille. When they explored a shape, a combination of pins was raised or lowered, and as a result they sometimes read braille letters unconsciously. It is interesting to note that, in this case, it takes a little time to disregard this information. Unfortunately, this factor is unique to each person, and we have no control over it. Users have to concentrate so as not to read braille letters, and that can create cognitive overload (Wickens, 2008; Young et al., 2015). According to our two volunteers, fewer and fewer blind people can read braille, since nowadays there are screen readers or applications that read the screen for them. Nevertheless, we should keep in mind that for some visually impaired people this skill exists and may require additional cognitive effort.

5 - Conclusion

Despite our small sample of two volunteers, we succeeded in offering a user-centered approach. Indeed, Nielsen suggested that with just five users, over 80% of usability problems can be identified (Nielsen, 2000); in return, researchers must carry out more interviews and tests. Since we had fewer users than expected, we called on them more often to compensate. The interest and determination of our two participants were decisive in understanding the errors in the Tactos system. The user-centered approach made it possible to prioritize the corrections to be made as well as the main areas of improvement. All the created content was adapted and improved thanks to user feedback. In this way, the Tactos development team was able to define rules for creating content that can be uploaded directly to the web.
Soon, a vector map will provide information relating to specific locations on the map (such as the remaining distance along a street). Apart from this future addition, we also aim to test graphs and other maps.

Over the weeks and sessions, the difference between the absolute and relative modes of controlling the position of the receptor field in digital space became an important question. Of these two modes, would one be better suited to a given situation or type of user? Should one navigation mode be avoided for a given population (relative versus absolute mode)? What about the difficulty, for a totally blind person, of locating oneself both on the computer screen and in the real world?
A comparative study could be carried out to understand the interest and relevance of each mode, especially for a given population (born blind, late blind, etc.).
Also, in a future study, we plan to use a computer with a touch screen to avoid discrepancies in the ratio between the screen size of the smartphone and that of the computer. In this way, the absolute mode would also be more advanced, since the user would be able to explore a larger area of the map.
In addition, our study raises the question of the importance of sensory input at every moment. Elements such as the size of the sensitive surface, the amount of information delivered, and the presence of one or more action points with different mobility should be studied more precisely.
We can imagine that, in a future study, we could split the matrix of 16 receptor fields into two independent matrices of 8 fields. With this configuration, would it be possible to use two fingers to browse the computer screen (with or without the smartphone) and search for information? It should be noted, however, that these two matrices would still correspond to a single finger (the index finger) placed on the braille cells of the Tactos box. What about the comprehension and clarity of the information? Could we consider this configuration even though two flows of tactile stimuli, coming from two different active fingers, would be processed by a single receptive finger?

Acknowledgements

The UAP project team (Université de Technologie de Compiègne) is grateful to and warmly thanks the volunteers who took part in the project, as well as all the people who made themselves available and advised us throughout the study (among them, Jean-Philippe Mengual and Corentin Voiseux from the Hypra company). We also thank the Banque Publique d'Investissement (Bpifrance) for financing the study. We hope that these ties can be maintained and that our collaboration can continue in the future to propose new content for the Tactos system.


Bibliography

Aleman, A., Van Lee, L., Mantione, M. H., Verkoijen, I. G., & de Haan, E. H.(2001). Visual imagery without visual experience : evidence from congenitally totally blind people. Neuroreport, 12(11), 2601-2604.

Allerkamp, D., Böttcher, G., Wolter, F. E., Brady, A. C., Qu, J., & Summers, I. R.(2007). A vibrotactile approach to tactile rendering. The Visual Computer, 23(2), 97-108.

Baddeley, A. D.(1990). The development of the concept of working memory : implications and contributions of neuropsychology.

Baddeley, A. D.(2000). Short-term and working memory. The Oxford handbook of memory, 4, 77-92.

Baddeley, A. D., & Hitch, G. (1974). Working memory. In Psychology of learning and motivation (Vol. 8, pp. 47-89). Academic press.

Barrett, D. J., Bradshaw, M. F., Rose, D., Everatt, J., & Simpson, P. J.(2001). Reflexive shifts of covert attention operate in an egocentric coordinate frame. Perception, 30(9), 1083-1091.

Bentzen, B. L.(1972). Production and testing of an orientation and travel map for visually handicapped persons. New Outlook for the Blind, 66(8), 249-55.

Berlá, E. P., Butterfield Jr, L. H., & Murr, M. J. (1976). Tactual reading of political maps by blind students : A videomatic behavioral analysis. The Journal of Special Education, 10(3), 265-276.

Berla, E. P., & Butterfield Jr, L. H.(1977). Tactual distinctive features analysis : Training blind students in shape recognition and in locating shapes on a map. The Journal of Special Education, 11(3), 335-346.

Berthoz, A. (1991). Reference frames for the perception and control of movement.

Blades, M., Ungar, S., & Spencer, C. (1999). Map use by adults with visual impairments. The Professional Geographer, 51(4), 539-553.

Boy, G. A.(Ed.). (2017). The handbook of human-machine interaction : a human-centered design approach. CRC Press.

Cornoldi, C., & Vecchi, T. (2000). Cécité précoce et images mentales spatiales : perceptions haptiques et représentations spatiales imagées. In Toucher pour connaître. Psychologie cognitive de la perception tactile manuelle (pp. 175-189).

Crossan, A., & Brewster, S. (2008). Multimodal trajectory playback for teaching shape information and trajectories to visually impaired computer users. ACM Transactions on Accessible Computing (TACCESS), 1(2), 1-34.

Eason, K. D.(1995). User-centred design : for users or by users ? Ergonomics, 38(8), 1667-1673.

Espinosa, M. A., Ungar, S., Ochaı́ta, E., Blades, M., & Spencer, C. (1998). Comparing methods for introducing blind and visually impaired people to unfamiliar urban environments. Journal of environmental psychology, 18(3), 277-287.

Fortin, M., Voss, P., Rainville, C., Lassonde, M., & Lepore, F. (2006). Impact of vision on the development of topographical orientation abilities. NeuroReport, 17(4), 443-446.

Gapenne, O., Rovira, K., Ali Ammar, A., & Lenay, C. (2003). Tactos : Special computer interface for the reading and writing of 2D forms in blind people. Universal access in HCI, inclusive design in the information society, 10, 1270-1274.

Hassenzahl, M., & Tractinsky, N. (2006). User experience - a research agenda. Behaviour & Information Technology, 25(2), 91-97.

Hatwell, Y. (1960). Étude de quelques illusions géométriques tactiles chez les aveugles. L'année psychologique, 60(1), 11-27.

Hatwell, Y. (2003). Le développement perceptivo-moteur de l’enfant aveugle. Enfance, 55(1), 88-94.

Heller, M. A. (2000). Touch, representation, and blindness.

Heller, M. A., Calcaterra, J. A., Burson, L. L., & Tyler, L. A.(1996). Tactual picture identification by blind and sighted people : Effects of providing categorical information. Perception & psychophysics, 58(2), 310-323.

Jansson, G., & Monaci, L. (2003). Exploring tactile maps with one or two fingers. The Cartographic Journal, 40(3), 269-271.

Jansson, G., & Monaci, L. (2004). Haptic identification of objects with different numbers of fingers.

Kaski, D. (2002). Revision : Is visual perception a requisite for visual imagery ? Perception, 31(6), 717-731.

Kerr, N. H. (1983). The role of vision in "visual imagery" experiments: evidence from the congenitally blind. Journal of Experimental Psychology: General, 112(2), 265.

Klatzky, R. L., Loomis, J. M., Lederman, S. J., Wake, H., & Fujita, N. (1993). Haptic identification of objects and their depictions. Perception & psychophysics, 54(2), 170-178.

Lappin, J. S., & Foulke, E. (1973). Expanding the tactual field of view. Perception & Psychophysics, 14(2), 237-241.

Lenay, C., Gapenne, O., Hanneton, S., Marque, C., & Genouëlle, C. (2003). Sensory substitution : Limits and perspectives. Touching for knowing, 275-292.

Loomis, J. M., Klatzky, R. L., & Lederman, S. J.(1991). Similarity of tactual and visual picture recognition with limited field of view. Perception, 20(2), 167-177.

Meijer, P. B. L. (1992). An experimental system for auditory image representations. IEEE Transactions on Biomedical Engineering, 39(2), 112-121.

Millar, S. (1975). Spatial memory by blind and sighted children. British Journal of Psychology, 66(4), 449-459.

Millar, S. (1976). Spatial representation by blind and sighted children. Journal of Experimental Child Psychology, 21(3), 460-479.

Millar, S. (1979). The utilization of external and movement cues in simple spatial tasks by blind and sighted children. Perception, 8(1), 11-20.

Millar, S. (1981). Self-referent and movement cues in coding spatial location by blind and sighted children. Perception, 10(3), 255-264.

Millar, S. (1985). Movement cues and body orientation in recall of locations by blind and sighted children. The Quarterly Journal of Experimental Psychology Section A, 37(2), 257-279.

Millar, S. (2000). Modality and mind: convergent active processing in interrelated networks as a model of development and perception by touch. Touch, representation and blindness, 99-141.

Millar, S., & Al-Attar, Z. (2004). External and body-centered frames of reference in spatial memory : evidence from touch. Perception & Psychophysics, 66(1), 51-59.

Morash, V. S., Pensky, A. E. C., & Miele, J. A.(2013). Effects of using multiple hands and fingers on haptic performance. Perception, 42(7), 759-777.

Nielsen J. (2000). Why you only need to test with five users. Jakob Nielsen’s Alertbox, March 19, 2000.

Overvliet, K. E., Smeets, J. B., & Brenner, E. (2007). Haptic search with finger movements : using more fingers does not necessarily reduce search times. Experimental Brain Research, 182(3), 427-434.

Paillard, J. (1991). Motor and representational framing of space. Brain and space, 163-182.

Pascual-Leone, A., Amedi, A., Fregni, F., & Merabet, L. B.(2005). The plastic human brain cortex. Annu. Rev. Neurosci., 28, 377-401.

Pashler, H. (1990). Coordinate frame for symmetry detection and object recognition. Journal of Experimental Psychology : Human Perception and Performance, 16(1), 150.

Révész, G. (1950). Psychology and art of the blind.

Röder, B., & Rösler, F. (1998). Visual input does not facilitate the scanning of spatial images. Journal of Mental Imagery.

Sanders, E. B. N., & Stappers, P. J.(2008). Co-creation and the new landscapes of design. Co-design, 4(1), 5-18.

Shao, F., Gao, Y., Li, F., & Jiang, G. (2017). Toward a blind quality predictor for screen content images. IEEE Transactions on Systems, Man, and Cybernetics : Systems, 48(9), 1521-1530.

Sribunruangrit, N., Marque, C., Lenay, C., Gapenne, O., & Vanhoutte, C. (2002, October). Braille Box: analysis of the parallelism concept to access graphic information for blind people. In Proceedings of the Second Joint 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society [Engineering in Medicine and Biology] (Vol. 3, pp. 2424-2425). IEEE.

Summers, I. R., & Chanter, C. M. (2002). A broadband tactile array on the fingertip. The Journal of the Acoustical Society of America, 112(5), 2118-2126.

Thinus-Blanc, C., & Gaunet, F. (1997). Representation of space in blind persons : vision as a spatial sense ? Psychological bulletin, 121(1), 20.

Tinti, C., Adenzato, M., Tamietto, M., & Cornoldi, C. (2006). Visual experience is not necessary for efficient survey spatial cognition : evidence from blindness. Quarterly journal of experimental psychology, 59(7), 1306-1328.

Tixier, M., Lenay, C., Le Bihan, G., Gapenne, O., & Aubert, D. (2013, February). Designing interactive content with blind users for a perceptual supplementation system. In Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction (pp. 229-236).

Toennies, J. L., Burgner, J., Withrow, T. J., & Webster, R. J.(2011, June). Toward haptic/aural touchscreen display of graphical mathematics for the education of blind students. In 2011 IEEE World Haptics Conference (pp. 373-378). IEEE.

Ungar, S., Blades, M., & Spencer, C. (2000). Can a tactile map facilitate learning of related information by blind and visually impaired people ? a test of the conjoint retention hypothesis. Proceedings of Thinking with Diagrams, 98.

Vanlierde, A., & Wanet-Defalque, M. C. (2004). Abilities and strategies of blind and sighted subjects in visuo-spatial imagery. Acta psychologica, 116(2), 205-222.

Vermeeren, A. P., Law, E. L. C., Roto, V., Obrist, M., Hoonhout, J., & Väänänen-Vainio-Mattila, K. (2010, October). User experience evaluation methods : current state and development needs. In Proceedings of the 6th Nordic conference on human-computer interaction : Extending boundaries (pp. 521-530).

Vidal-Verdú, F., & Hafez, M. (2007). Graphical tactile displays for visually impaired people. IEEE Transactions on neural systems and rehabilitation engineering, 15(1), 119-130.

Warren, D. H.(1977). Blindness and early childhood development. American Foundation for the Blind.

Ziat, M., Lenay, C., Gapenne, O., Stewart, J., Ammar, A. A., & Aubert, D. (2007, July). Perceptive supplementation for an access to graphical interfaces. In International Conference on Universal Access in Human-Computer Interaction (pp. 841-850). Springer, Berlin, Heidelberg.


Cite this article

Lenay, Charles, Roccamatisi, Romain, Aubert, Dominique, & Ollive, Tobias. "A first Approach to Tactile Maps and Web Contents for Visually Impaired and Blind People with a Sensory Supplementation System (Tactos)." 15 janvier 2022, Cahiers Costech, numéro 5.

DOI https://doi.org/10.34746/cahierscostech124
URL https://www.costech.utc.fr/CahiersCostech/spip.php?article124