Martez Mott smiling at the camera, wearing a blue pullover.
13 Letters, Rethink the Screen

Rethink the Screen

Martez Mott is a senior researcher in the Ability Group at Microsoft Research, where he works every day to challenge assumptions and rethink paradigms for how people interact with technology. In one of our favorite interviews yet (on Global Accessibility Awareness Day, no less), we chatted with Martez about AR, VR, touchscreens and everything in between.

Episode Transcript

Will Butler:

Hello, everybody. It's Global Accessibility Awareness Day 2021. And you're listening to 13 Letters. The podcast hosted by me, Will Butler, from Be My Eyes, and my co-host Cordelia McGee-Tubb from Salesforce. This year for Global Accessibility Awareness Day, we decided to celebrate with one of our favorite interviews to date. Martez Mott is a researcher and brilliant accessibility mind working behind the scenes at Microsoft to develop what's next in AR, VR and touchscreen technology for people with all types of abilities.

Will Butler:

We get plenty nerdy on this episode, talking about everything from World of Warcraft to the burden of adaptation. But first, a big thank you to our transcript sponsor, Diamond. Diamond makes sure that transcripts for all of the 13 Letters podcasts go up on our website at bemyeyes.com/podcasts. Thank you, Diamond, for sticking by us. They're a fantastic digital agency that also happens to take accessibility and inclusion very seriously. Find out more at diamond.la. And now, I'm going to toss it over to Cordelia to introduce our guest, Martez Mott.

Cordelia McGee-Tubb:

Hey, Martez, thanks for being with us today.

Martez Mott:

Thank you both so much for having me. It's great to be here.

Will Butler:

We're super excited. I think we've both been nerding out over your academic output this last week.

Martez Mott:

No, that's very kind.

Cordelia McGee-Tubb:

I recently learned the proper way to read academic papers. So, I was like, "Oh, I'm going to do this. I can read these and understand that." It was great.

Will Butler:

What is that?

Martez Mott:

No, I really appreciate that.

Will Butler:

How do you read an academic paper?

Cordelia McGee-Tubb:

Well, I don't know. Maybe this isn't the proper way, but you read the abstract, and then I skim, and then I go back and read the parts that stood out to me during my skim of the entire paper. I don't know. I'm still learning technique. But since I recently finished grad school, I've been reading a lot of academic papers lately.

Will Butler:

Martez, how's that transition been from academic to researcher at Microsoft?

Martez Mott:

Yeah, I think it's actually been a pretty smooth transition. Most of the work I do is still pretty much what you might consider in the academic arena; it's still a lot of research that is publishable, things that we submit for peer review to external venues outside of the company. I still serve on program committees, and review papers and journals and things like that for external conferences. So, I think it's been a pretty great transition.

Martez Mott:

I think it's been interesting working more on the corporate side and getting a better sense of, "Okay, well, this is how Microsoft does design. This is how Microsoft does engineering," learning more about product teams, and which product teams own which features, and what the corporate topology looks like if you want to try to get things done. So, that part has probably been the most interesting to me so far, probably because it's the most different from what I'm used to from just being in academia for the PhD.

Will Butler:

Are you ever nervous that you're still using too-big words in your presentations?

Martez Mott:

I hope not. I mean, a lot of the time, in a lot of these meetings, it's usually like a role as a consultant. People might be working on a project or a feature, and they might have some questions around accessibility, so they might reach out to me or other people within my team. Just like, "Hey, we just want to be able to chat with people who work in accessibility to learn more about what's going on." And especially in those situations, I try to keep the academic jargon down to a minimum because I think it can be a trap.

Martez Mott:

I'll be interested to actually hear about Cordelia's experience with this. I did my PhD, it took six years. So, you spent six years in an environment in which there's a lot of people who talk about academic content a lot. So, you get into this habit of falling into academic speak because when you're in class, that's what you do. And when you go home and you read papers, that's what your mindset is constantly on. So, it actually has been pretty refreshing to break out of that.

Martez Mott:

I still have to be in that mode for a lot of the academic writing I do. But I do a lot more internal writing now as well. Like internal memos, one-pagers, brief summaries of work. Because if I'm doing something and I want a person on the product team to be interested, I don't think the best course of action for me would be like, "Hey, I'm writing this paper. Go read this 50-page paper I wrote, get back to me, and let me know if you have any ideas."

Martez Mott:

I think I can make that information a lot more distilled. Like, "Oh, hey, here's a one-page summary that gives you the core things you need to know about the work we're doing." And I think that's a lot more digestible for people.

Cordelia McGee-Tubb:

Yeah. I mean, there's the whole meta topic of how accessible is the field of accessibility. And one of the topics that people talk a lot about, is even just the word accessibility and how online we shorten it to A11y, which reads strangely to screen readers. That's why our podcast is called 13 Letters because the word accessibility is 13 letters, it's just long. So, there's a lot of nuance in how we talk about these things. So, that's why we're super excited to have you on the podcast to talk, have a conversation about accessibility and your role in it.

Cordelia McGee-Tubb:

But I wanted to jump way back before we get into all this awesome stuff you're doing at Microsoft, to just like, tell us a bit about yourself, Martez. Where did you grow up?

Martez Mott:

Yeah. So, I grew up in Detroit, Michigan. That's where my family is from. I loved it. I mean, I've been away now since 2006 mostly, when I graduated high school. But it's a great city. Detroit gets a bad rap. I encourage everyone who hasn't been to go; it's a great city. There's a lot of stuff to do there.

Will Butler:

What sort of kid were you? What sort of stuff were you into as a kid?

Martez Mott:

I was into sports, mostly. I think like a lot of kids, I was a big basketball fan. I was a big football fan. I think I was always into science and technology stuff, but I would say it was more like a secondhand hobby. If I go back and think about how I spent most of my time as a kid, I was just outside in the front yard, shooting hoops a lot. So, I'd like to tell this story, if I could. The thing that I realized actually got me into computing in general was not making the basketball team.

Martez Mott:

So, in the seventh grade, I had a science teacher who was running a robotics club. It was the Lego Mindstorms robotics kits. And I had just switched schools. I was new there, and he didn't know me. And I was doing well in the class. And he was like, "Hey, Martez, we're doing this robotics club. It'd be great if you joined." And I'm like, "Nah, that's okay. I'm playing basketball." So, basketball tryouts came, and I went out for the team, did everything.

Martez Mott:

And I didn't make the team. This is back when they would actually just put a printed piece of paper on the door of the locker room, like, "Okay, this is who made the team." So, that happened. Then I go to class, either the same day or the next day, and my science teacher was like, "Yeah. So, you didn't make the basketball team. How about robotics? You still interested?" So, at that point, I didn't have anything else.

Martez Mott:

So, I joined the robotics team. And it was great. I mean, it was a great time. We got ready for this robotics competition with other schools. I'm not sure if it was other middle schools, or if it was a state of Michigan thing; I wasn't quite sure at the time. But we had some competition. They had these rules: "Okay, you have to send your robot essentially on this obstacle course, and it has to go somewhere and pick something up and bring things back." And it was just a lot of fun. And it was the first opportunity I'd ever had to program anything.

Martez Mott:

I didn't really know what programming was at the time. I'd never worked with robotics or anything like that. And that just got me really interested in the whole thing, even more than I already was in tech stuff. I was like, "Oh, this is something I could actually do." So, after that, I was set on, "Okay, I want to do stuff in technology, robotics, electronics." And I just carried that all the way from middle school through high school to college.

Cordelia McGee-Tubb:

Because essentially, you could program a robot to shoot basketballs for you if you wanted to.

Martez Mott:

Exactly. I could get my revenge in other ways for not making the team. Yeah.

Will Butler:

You must have supportive people, supportive family and friends around you to make a pivot like that.

Martez Mott:

Oh, no. Yeah, definitely. I think my family is great. My family has always been a group of people who are like, "Hey, live your life." There was never any pressure on me and my siblings to do or be anything specific. So, when I was interested in sports, my parents were really supportive of that. They'd take me and sign me up for leagues and games, buy me the equipment, and pay all the fees and stuff. All the ridiculous stuff that parents have to do for little league sports. But yeah, it was fun.

Martez Mott:

Then yeah, when I changed and wanted to do more technology-focused things, I think for them, they were like, "Yeah, okay, whatever." Going into college, that was something that my parents always stressed. They were just like, "Hey, you're going to go to college." Neither of my parents went to college, so for them, it was really important that me and my siblings did. So, that was an important thing. It's something that they made sure would happen.

Will Butler:

You are the first 13 Letters guest to ever say the word sports.

Martez Mott:

Really?

Will Butler:

But I think we actually need to... we need to be talking about this type of stuff more too. Because we get so into the technical stuff, we get hyper focused on design and all this stuff, and we forget that everybody, whether they're the designers or the users... no, actually, we talked about sports with Paul Parravano as well. They're just people who have lives, right? And accessibility factors in everywhere. It's not just about computers and products, right?

Martez Mott:

Yeah, definitely.

Cordelia McGee-Tubb:

But now I want to talk about accessibility because I want to know... I don't know anything about sports. Well, I went to soccer camp for a week, and I spent the entire time just standing on the sidelines, trying to practice hitting the ball with my head. I don't know if that is why I'm the way I am now. But anyway, no, Martez, I'm curious, so you were getting into robotics in middle school; when did you or how did you start to get interested in accessible technology?

Martez Mott:

Yeah. So, I think that started in college. I had a professor, this guy Duke Hutchings, who convinced me to become a computer science major. I went to Bowling Green State University in Bowling Green, Ohio. And at first, I wanted to do engineering, like electrical engineering or computer engineering, something like that. But they did not have an engineering department; they had a technology department, that was their thing. So, it was like engineering, but not really.

Martez Mott:

And I wasn't really enjoying that program. As part of that program, I had to take some computer science courses. And the professor there, Duke, was like, "Hey, you should do an undergraduate research experience. You should change your major to computer science. I think you'd be good at it." So, I was like, "Okay." So, I came back the next school year, and that's what I did. And I reached out to Duke, like, "Hey, can we do some research together?"

Martez Mott:

And Duke was like, "I left the university. But here are some other good people that you should reach out to." So, I ended up working with a group of computer scientists, geologists, and psychologists on a project that was looking at how to improve the spatial reasoning skills of undergraduate geology students. The geology department apparently was having this problem where students get to structural geology and then they have really bad retention rates.

Martez Mott:

A lot of people quit or stopped the program at that point, when they got to structural geology. And one of their working theories was that structural geology requires a lot of spatial reasoning skills. So, it's a lot of tasks like: I'm going to give you a two-dimensional topographic map, and I need you to infer what the 3D structure looks like. Or I'm going to give you a cross section of the earth, a block that represents, "Hey, this is a piece that we took from the earth."

Martez Mott:

And say we cut this block down the middle to understand the growth of the rock over time, or things like that. What do you expect the middle of this cross section to look like? So, there were a lot of these tasks that were really difficult for people. My first project was building tools to look at whether we could improve people's spatial reasoning skills. These were little Java applets and things like that we were building at the time.

Martez Mott:

And the idea would be like, "Hey, we would do these pre-tests." We'd test people in the structural geology class when they came in, on how good their spatial reasoning skills were. The psychologists knew about all of the standardized tests that had been developed to measure people's spatial reasoning skills. So, there are a lot of tests like, "Hey, I'm going to give you two cubes. You're going to tell me whether these cubes are the same or not based off what's on each face of the cube," and things like that. So, I just got really into that.

Martez Mott:

Because at that time in computer science, you're doing a lot of abstract learning, right? You're learning about algorithms and data structures, usually in these very safe environments, these very [inaudible 00:15:12] examples. And this was my first opportunity to work on something that was bigger. And I was also helping people. At least I thought I was doing something that could potentially help people. And that was interesting to me.

Martez Mott:

And I thought, "Hey, if I'm going to work on technology, build technology, I want to build technology that can help people in some way." So, after wrapping up the work on that project, I just started to think more about accessibility more broadly. The people working with me in computer science, Dr. Poor, Dr. Zimmerman, and Dr. Leventhal, were all interested in a lot of these things around accessibility. So, we started a project. I'm not sure if either of you have played World of Warcraft before or know about it.

Cordelia McGee-Tubb:

Yes, I have.

Will Butler:

I did a bit of that in my day.

Martez Mott:

Okay. Okay, great. All right. So, this is always great with people who play because it makes the conversation easier. So, World of Warcraft has these things called add-ons. Essentially, these are changes that you can make to the UI of the game, the user interface.

Martez Mott:

So, we were working on, "Well, how can we create add-ons, which you build using Lua, a scripting language, that would make World of Warcraft more accessible to players with low vision or different types of vision conditions, and that could be really useful to them if they wanted to play?"

Martez Mott:

So, the idea might be, how can you create an interface that works really well for someone with macular degeneration, for example? And how might that look different for somebody with diabetic retinopathy, or just low vision in general? So, we were trying to go through these design elements and say, "Oh, how can we change UI elements and things like that based on the characteristics of a person's vision?"

Martez Mott:

So, I thought it was just very interesting work. I got really obsessed with it, not the World of Warcraft project in particular, but the thought that, "Yeah, I can do good work, creating technologies that might be able to do some good." So, my mentors at the time at Bowling Green were like, "Hey, if you want to keep doing this, you should get a PhD, because you could just keep doing research. You could be a professor. You can do all these different things." So, I said, "Okay, sign me up." And then, that led me to the University of Washington.

Will Butler:

Wow. Can you give us an example of what type of filter might be added to World of Warcraft to improve someone's experience? And I'm just so curious.

Martez Mott:

Yeah. So, somebody with macular degeneration, for example, might have good peripheral vision, but some central vision loss. So, the idea there would be, "Okay, well, if I wanted to create an interface based off that type of profile, where this person has more peripheral vision than central vision, then maybe some of the ways that information about a character is currently designed and placed need to change." That's in World of Warcraft.

Martez Mott:

An example might be, "Okay, well, if I'm traveling, or if I'm engaged in a battle with another character or a non-playable character, or something like that, what information is central and crucial to me? And how can I get that information to the user in a way that doesn't break the immersion that you want them to have from actually engaging in their environment?" So, take something like damage.

Martez Mott:

So, if your character gets damaged, typically they'll show floating numbers above your character. It's an indication to let you know, "Okay, something's happened." We were like, "Okay, one of the things we can do is take this information and place it elsewhere, outside of the center, which would be the central vision of the player, because the camera is always placed around your character, in the center, right?"

Martez Mott:

So, your character is always going to occupy that center spot. It's not really useful in that scenario to place that information there. So, essentially, we'd go through these types of design exercises, reading a lot about different types of visual conditions, and then trying to understand, "Okay, how can we design a new interface that essentially says, hey, we now know a little bit more about what this person may be able to do. Let's change our interface to accommodate that."
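To make the design idea above concrete, here is a minimal sketch, in Python rather than the Lua the add-ons were actually written in, of choosing where to place a game UI element based on a player's vision profile. The profile names, offsets, and function are hypothetical illustrations, not the actual add-on Martez's group built.

```python
# A minimal sketch of the idea described above: choose where to place a
# game UI element (e.g., floating damage numbers) based on a player's
# vision profile. Profiles, offsets, and names are hypothetical.

from dataclasses import dataclass

@dataclass
class Placement:
    x: float      # horizontal offset from screen center, as fraction of width
    y: float      # vertical offset from screen center, as fraction of height
    scale: float  # relative text size

# Assumed profiles: central vision loss (e.g., macular degeneration) pushes
# information toward the periphery; general low acuity keeps it central
# but enlarges it.
PROFILE_PLACEMENTS = {
    "central_loss": Placement(x=0.35, y=0.25, scale=1.2),
    "low_acuity": Placement(x=0.0, y=0.15, scale=1.8),
    "default": Placement(x=0.0, y=0.10, scale=1.0),
}

def place_damage_text(profile: str) -> Placement:
    """Return where floating damage numbers should appear for this profile."""
    return PROFILE_PLACEMENTS.get(profile, PROFILE_PLACEMENTS["default"])

if __name__ == "__main__":
    # The character always occupies the center of the screen, so for a player
    # with central vision loss the text moves toward the periphery instead.
    print(place_damage_text("central_loss"))
```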

Will Butler:

Wow. Did these add-ons ever get used?

Martez Mott:

So, we created it. I'm not sure if we ever actually released it. And I'm not sure if this will come up later, but I think one of the things that has been most interesting for me about being at Microsoft Research now is the opportunity to actually create technologies that can get into the hands of more people. For this World of Warcraft project, and projects I did as a PhD student at the University of Washington, it's easy to build prototypes, right?

Martez Mott:

It's easy to build a prototype, and you can test it with a small number of users. And you can get some good feedback. And typically, in academia, depending on the contribution you want to make, and also, depending on the type of artifact that you build, those can differ. But essentially, the idea is that, "Hey, I created something. I demonstrated its effectiveness or its usefulness. And now, I can move on to the next thing." To actually get a prototype to work in a real-world running system and all these different things takes a lot of engineering.

Martez Mott:

It takes a lot of integration. It takes a lot of deep understanding about how things work, not just in isolation, but in the ecosystem. And that's something that's been really interesting to me because I think those are a lot of considerations that we don't always make when we do things in academia, because we have the pleasure of being able to sit at a layer above that. We can say, "Hey, imagine this example, or imagine this scenario, and then we can dictate it from there." So yeah. So, to your original question, yeah.

Martez Mott:

So, no, we didn't really have anybody outside of the testing we did actually use it. Even at that time, there was already a community of developers building add-ons and things like that for World of Warcraft, so we could have uploaded it somewhere and had it there. But I think now, with more open-source software tools and things like that, it would be much easier to just put the script on GitHub and have a link so people can grab it, or they can fork it and do their own builds and things like that.

Cordelia McGee-Tubb:

And I think too... I hope later we'll talk a little bit more about virtual reality because I know you've been doing a lot of research around that. But just thinking about that particular example of putting important information in central vision versus peripheral vision, this initial project work and research, I imagine, feeds in really nicely to virtual reality as well.

Cordelia McGee-Tubb:

When we think about 360-degree visual space, all of that stuff that applies in a two-dimensional World of Warcraft environment, I can see it being really applicable in three dimensional virtual and augmented reality as well.

Martez Mott:

Yeah. No, yeah, definitely. I think anytime you're working in these virtual environments, whether you have an embodied experience in VR or a more third-person experience of having it on a monitor of some kind, there are a lot of similarities. And I do think this early work that I did has influenced some of my later thinking around the accessible augmented and virtual reality work that we've been looking at since I started at Microsoft Research.

Will Butler:

So, spatial geology and World of Warcraft. This was your undergrad, right?

Martez Mott:

Yeah, this is undergrad.

Will Butler:

What did you do your dissertation on?

Martez Mott:

Yeah. So, my dissertation project was around improving the accuracy of touch input for people with physical disabilities. The idea behind that work is that touchscreens are a great technology, right? But touchscreens, like all technologies, make these assumptions about people. A touchscreen, for example, will say something like, "Okay, if you want to be able to press a button on a touchscreen, you typically have to do that with a single finger." You can't land four fingers on the button and expect it to work.

Martez Mott:

Especially if it's a small button, like a key on an onscreen keyboard or something, you can't expect the system to register it accurately. Or similarly, if you touch with the back of your hand, like if you use three knuckles to try to use the touchscreen, the people who designed how we interact with touchscreens didn't assume that people would interact with those devices that way. They assumed, "Okay, you use a single, cleanly extended index finger."

Martez Mott:

They assumed that you might use two fingers or something like that to do pinch to zoom, or a two-finger swipe, or double tap, something like that. So, I was interested in understanding, well, how do people with physical disabilities, so it might be people with cerebral palsy or muscular dystrophy, what challenges, if any, do they experience when using touchscreens? The first project I did in this space was called Smart Touch. And the first piece of this was just understanding, what are the touch behaviors of people with physical disabilities?

Martez Mott:

So, I worked primarily with people with cerebral palsy at this place called Provail. Provail in Seattle used to be a United Cerebral Palsy center. And they have an accessible computer lab there, run by this gentleman named Gabe. And it's great. Gabe is great. They would have clients come in, and they could use the accessible computer lab for an hour, hour and a half.

Martez Mott:

I can't exactly remember what the time was. Gabe would be there, and there would be other members of the community there. They would play games or do other types of activities, anything they wanted to do, as long as they did it during that hour and a half. So, when I first started this project, the first thing I did was reach out to Gabe, explain who I was, and say I just wanted to go and observe, because I hadn't worked a lot... I mean, at this point, I hadn't worked at all with anybody with physical disabilities.

Martez Mott:

I didn't know what technologies they used. I didn't know what their practices or behaviors or habits were. So, I just went and observed for a month or so. I would go to the accessible computer lab, hang out in the back, and just observe and take notes. And then, when I finally got ready to conduct the project, the idea was, "Hey, I want to create a testbed that can capture people's touch behaviors without them trying to perform in a manner that they believe will make the touchscreen react accurately to them."

Martez Mott:

So, if it's most comfortable for me, for example, to interact with a touchscreen by just using my entire open palm on the touchscreen, and if I feel that I'm touching accurately with my thumb, but what the screen senses is an entire hand on the screen, that can be problematic. But if that's what people wanted to do, we wanted to capture that. So, that's what we did.

Martez Mott:

We essentially created this testbed. I can't exactly remember the number of people now, somewhere around 10 participants, who were able to provide us this touch data. And from there, what we were trying to understand is, "Okay, well, we can get a better sense of what these touch behaviors are like, and we can identify that these touch behaviors aren't exactly what the system assumes it will get and react accurately to."

Martez Mott:

So, what I did was, "Okay, well, can we build a model, and use this model to train the system to respond accurately to people's touch behaviors?" The same way that you might have to calibrate a voice recognition system, or a gaze recognition system if you're using an eye tracker or something, the idea would be, okay, you can go through a touch calibration process. You buy a new iPhone or Android smartphone or tablet, whatever the case may be, and you can imagine, hey, you turn on this accessibility setting and actually pop some balloons that are placed on the screen.

Martez Mott:

That collects the touch data, and we developed this algorithm, the Smart Touch algorithm, that can essentially break down that touch data and, from there, extract the most salient pieces to understand in the future where the person is intending to touch. And it was a great project to work on. It took two years. In retrospect, that doesn't seem like a lot of time. But at the time, as a PhD student, I'm seeing my peers publish these papers and do all this other stuff.

Martez Mott:

I was working on this one project, just trying to get it out there. But it was great. I learned so much from Gabe and the other folks at Provail. I learned so much from the participants who agreed to work with me and to help us with that project. So, it was a great experience overall. That led to the next project, which was a similar type of thing, right? The thing we did for Smart Touch was on a big interactive table; we used the Microsoft PixelSense, a big touchscreen. It looks like a television.

Martez Mott:

For our next project, we were interested in understanding touch behaviors on smaller form factors, like tablets and smartphones. But we took a similar approach. We collected data first, trained the models, tested to see how well the models worked, and then tested it with real participants in real time. So, that was my dissertation work. I know, it was all focused on touchscreens, no World of Warcraft, no video games, no geology. But I learned a lot. It was a great experience.
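As a rough illustration of the calibrate-then-predict workflow Martez describes (collect touch data against known targets, learn a per-user model, then use it to infer intended touch points), here is a minimal Python sketch. The simple averaged-offset correction below is an assumption for illustration; it is not the actual Smart Touch algorithm.

```python
# A minimal sketch of the calibrate-then-predict workflow described above,
# not the actual Smart Touch algorithm. During calibration the user aims at
# known targets (the "balloon popping" step); we learn a per-user correction
# from the sensed touch centroid to the intended target, then apply it to
# future touches. All names and the simple offset model are assumptions.

import numpy as np

class TouchCalibrator:
    def __init__(self):
        self.offset = np.zeros(2)  # learned (dx, dy) correction

    def fit(self, sensed_centroids, intended_targets):
        """Learn the average offset between where the screen sensed the
        touch and where the user was actually aiming."""
        sensed = np.asarray(sensed_centroids, dtype=float)
        targets = np.asarray(intended_targets, dtype=float)
        self.offset = (targets - sensed).mean(axis=0)

    def predict(self, sensed_centroid):
        """Estimate the intended touch location for a new touch."""
        return np.asarray(sensed_centroid, dtype=float) + self.offset

if __name__ == "__main__":
    # Calibration data: this user consistently lands low and to the left
    # of the targets they intend to hit.
    sensed = [(100, 210), (305, 412), (510, 118)]
    intended = [(120, 200), (325, 400), (530, 105)]

    cal = TouchCalibrator()
    cal.fit(sensed, intended)
    print(cal.predict((250, 260)))  # corrected estimate of the intended point
```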

Will Butler:

Yeah. It's an incredible range, though.

Cordelia McGee-Tubb:

Yeah. And I mean, making touchscreens accessible enables all of these other different apps and all these other experiences through the touchscreen. So, maybe people are using that to play World of Warcraft or to do intense geology spatial stuff that I'm still trying to wrap my head around.

Martez Mott:

Yeah, and that's a great point. One of my participants mentioned that he really wanted to use an iPad because there was a certain app where it's just like, "Hey, if I want to use this, I need to use it on iOS." And he's just like, "I just can't, because on iOS in particular, it's the touchscreen, and touchscreens are inaccessible to me." So, it was one of those things where you do realize that.

Martez Mott:

And one of the things I find fascinating about my work, [inaudible 00:30:25] I look at it, is that providing input to devices is important, not just because it provides access to that device, but, to your point, Cordelia, because it provides access to all of the applications and services that run on those devices.

Martez Mott:

So, it's one of those issues where, for example... this is another project we had, that's not related. When I was an intern at Microsoft Research, we worked on a project with my friend and colleague, Cynthia Bennett. She was investigating how young teens with low vision interacted with camera-based applications, like social media camera applications, so Instagram, Snapchat, and things like that.

Martez Mott:

And one of the fascinating things about that project, one that opened my eyes, was that if these technologies aren't accessible to you, in the case of Snapchat or Instagram, it's not just, "Okay, I can't take pictures." There's a whole conversation happening on these platforms that you don't have access to if you're not able to engage the same way that your peers can.

Martez Mott:

And that's even more troublesome, I think, or scary for teenagers who are worried about not being in a social conversation, not being able to engage in the same type of discourse that their peers are having on these different platforms. I'm not sure about the level of discourse they have on Instagram, but whatever.

Will Butler:

It's not academic, I'll tell you.

Martez Mott:

Yeah.

Cordelia McGee-Tubb:

I'd love to support that. I feel like we should do a plug here for one of Will's other podcasts, Say My Meme, which is all about describing memes for people who can't see them. Because memes are such an important cultural phenomenon that's taking place on every social media platform imaginable.

Will Butler:

Absolutely. It's not just because we love memes, it's so that you give people something to talk about. So, we keep people from being left out of the conversation. Yeah.

Cordelia McGee-Tubb:

Yeah, I feel like also... we talked about social media platforms. I mean, I think, touchscreens or devices in general are platforms that allow you to access all sorts of other different things. So, getting that baseline touch experience working for everyone and customizable, oh, my goodness, I love that idea of the training, like voice dictation software, like eye gaze software. That makes so much sense.

Will Butler:

So, Martez, what's currently the most effective way for people with these types of limitations to interact with their touchscreens?

Martez Mott:

Yeah. So, this is something that is a big point of discussion, just because I think there's a debate. To answer your question, I think right now most people buy assistive devices that help facilitate that. So, they can buy things like a screen guard, a touchscreen overlay that only gives you cutouts in certain places, so that you can only touch within certain regions.

Will Butler:

On the podcast, we call that a keyguard overlay.

Martez Mott:

Oh, yeah. Yes, yeah. So, beyond these overlays, you have things like a head stylus or a mouth stylus, mouth sticks. So, using a different type of pointing implement, either in the mouth or attached to the head or another body part, things like that. And the reason why... I follow this design philosophy called ability-based design, which is something that my PhD advisor, Jacob Wobbrock, and his colleagues created.

Martez Mott:

And essentially, the idea behind ability-based design is that people have abilities, and you want technologies to be able to recognize or understand those abilities and adapt themselves appropriately to respond to them. So, in the case of touchscreens, these assistive technologies that people have to go out and buy, we call that the burden of adaptation.

Martez Mott:

So, the idea being, "I bought a touchscreen, but now if I want to use my touchscreen, I have to go out and buy this overlay thing, or I have to go out and buy a mouth stick or a head stick, or whatever the case may be, just to use it." It places the burden on users to conform to the ability demands of the system. So, the system is saying, "We expect people to use a single, cleanly extended index finger, or whatever finger, to interact with the screen.

Martez Mott:

Oh, that's not something you're able to perform well? Here, let me offer you this range of solutions that require you to change yourself in some manner to respond to our narrow, restrictive way of how we think this interaction should go." In ability-based design, you would say, "Okay, well, instead of going to the user and saying, 'You need to change something about yourself so that you can use this,' the system should say, 'I need to change something about myself so that the user can just come to me however they are, and I can work appropriately.'"

Martez Mott:

So, I think that's one of the things we've been trying to push: hey, if we can get rid of this burden of adaptation, if we can make devices that are smarter and better able to understand and recognize human abilities, those devices could then adapt themselves to be better configured for each person. My touch behavior, for example, might be different from Will's touch behavior, which might be different from Cordelia's touch behavior.

Martez Mott:

Your touch behavior might be different in the morning than in the afternoon because maybe you're more fatigued. Maybe if you're on medication to reduce something like tremor or spasms, after you take your medication, you might experience less tremor. But as your medication wears off, you might experience more tremor. So, in a case like that, it would be great if your system could understand or recognize, "Okay, hey, this person is experiencing less tremor, I can relax the interface in a certain way because I don't need those accommodations right now."

Martez Mott:

"But now they're experiencing more tremor, so we need to rearrange the interface, or turn on some options, or do whatever to accommodate that." And all the while, the user is just interacting. We don't want the user to have to say, "Oh, I'm experiencing tremor right now, I need to change something about me to make this work better."
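Here is a minimal sketch of the kind of ability-based adaptation described above: estimate how much tremor a user is currently experiencing and relax or tighten the interface in response. The accelerometer-variance proxy, the thresholds, and the settings are all hypothetical illustrations, not a shipping feature.

```python
# A minimal sketch of the adaptation described above: the system estimates
# how much tremor a user is currently experiencing (here, crudely, from the
# variability of recent accelerometer samples) and relaxes or tightens the
# interface accordingly. Thresholds, sizes, and the data source are assumed.

import statistics

def estimate_tremor(accel_samples):
    """Very rough tremor proxy: standard deviation of recent magnitudes."""
    return statistics.pstdev(accel_samples)

def interface_settings(tremor_level):
    """Pick a target size (pixels) and dwell time (ms) for the current state."""
    if tremor_level < 0.05:      # assumed "low tremor" threshold
        return {"target_size": 44, "dwell_ms": 0}
    elif tremor_level < 0.15:    # assumed "moderate tremor" threshold
        return {"target_size": 64, "dwell_ms": 150}
    else:
        return {"target_size": 96, "dwell_ms": 300}

if __name__ == "__main__":
    morning = [0.01, 0.02, 0.01, 0.03, 0.02]   # shortly after medication
    evening = [0.10, 0.22, 0.05, 0.30, 0.18]   # medication wearing off
    print(interface_settings(estimate_tremor(morning)))
    print(interface_settings(estimate_tremor(evening)))
```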

Will Butler:

Yeah, better to have my tremor glove or my morning iPad.

Martez Mott:

Exactly.

Will Butler:

I've never heard anyone say the burden of adaptation before. In the disability community with a capital D, we think about adaptation as such a positive thing, such an integral skill. We have to be able to adapt. That's our number one capability as people with disabilities, blah, blah, blah, blah, blah. No one's ever turned to me and said, "Actually, that's a burden that your designers are putting on you."

Martez Mott:

Yeah. I mean, for me it was a similar type of thinking, because there's this question of: these devices now have more sensors and more capabilities, and they're getting more sensors and more capabilities every year. We can leverage these things, right? We can use the accelerometer in your smartphone to do some interesting things, to understand maybe tremor, for example. And if we can leverage these sensors and these data sources to better provide an experience to a person, I think we want to explore that possible trade-off.

Martez Mott:

Because these devices are already expensive in themselves, and we're asking people to go out and acquire additional equipment that's more expensive, that can be lost or broken or stolen, whatever the case may be. If the device itself can just be one part of the solution... Because I don't think the entire solution is that every device needs to be intelligent; not every device is going to have the ability to be intelligent. So, there's always going to have to be some room, I think, for people to bring other types of assistive devices to overcome accessibility challenges.

Martez Mott:

But I think the idea, at least the way I like to think about it for some of the things I'm doing, like the work I'm doing now with virtual and augmented reality, is that we can solve some of these problems if we just do the work of trying to think about how we can adapt to people's abilities. What's the most important thing for us to do in this moment? And how can we understand users at a level that makes it worthwhile for them as well to offer up their data?

Martez Mott:

There's a question about data in this situation, too, right? Are you willing to have more passive sensing turned on to get the potential benefit of a system understanding your context or your situation, and how we can provide more accessibility to people in those domains? So, I think it's a really interesting question. I think it's something that the ability-based design framework lays out really well.

Martez Mott:

They have these principles that designers, researchers, and engineers should follow when we're going through the design process. And for me, it's been really helpful in helping me think about my work in that way.

Will Butler:

So, these are the folks at University of Washington?

Martez Mott:

Yeah, yeah. The folks at the University of Washington.

Cordelia McGee-Tubb:

I listened to a talk that you did recently, Martez, where you were talking a little bit about ability-based design, and this idea that every piece of technology has embedded ability assumptions in it, whether explicit or implicit. And that really stood out to me as well of, it's not just that the interface is expecting you to adapt, but it's also making all of these assumptions about you, that might not be the case.

Cordelia McGee-Tubb:

I'm curious, especially for product developers and designers who may be listening to this, how can creators challenge their own ability assumptions when they're creating technology?

Martez Mott:

Yeah. No, that's a great question. So, with ability assumptions, I think there's a useful exercise. When I get the opportunity to teach, I like to do this. It runs like, "Okay, just look around the room and look at all the things around you." It doesn't have to be technology devices, it can be anything. We make the claim that anything that's operated by a human, any type of thing, has ability assumptions embedded in it.

Martez Mott:

So, it doesn't have to be a piece of technology. It could be a door. It could be a doorbell. It could be a handle. It could be a towel. All these different things. When I go through this exercise with students, I say, "Okay, well, if you can say what the assumptions are..." So, for a touchscreen, for example, if I can enumerate and go through and say, "Okay, here are all the assumptions I think touchscreens are making about what people need to do or should do to interact with them," then I can do the opposite, which is, I can go through and interrogate those assumptions.

Martez Mott:

So, for a touchscreen, if it's like, "Hey, a person can interact with a single finger," you can decide, "Okay, well, what if a person can't do that?" They can't use a single finger. What if someone can't perform a gesture like swipe or pinch to zoom? Whatever the case may be. And I think it's useful because you can start to identify where these barriers are. And I think there's always this question too. Like, you want to be able to provide functionality, but the functionality itself is not always the same as the implement, right?

Martez Mott:

So, zoom could be the functionality that you want. And doing the two-finger pinch gesture will get you to zoom. But you could also have plus or minus buttons that people can press to zoom. You could do all these different things. I think one of the things we're trying to understand about these ability assumptions is, "Okay, well, if you have to start removing some of these things that you're taking for granted about how people are going to interact with something, what alternatives are you going to provide to people for whom these assumptions don't hold true?" Right?

Martez Mott:

So, if there's something that you're identifying as, "Okay, this is a requirement of the system," how narrow is that requirement? How stringent is that requirement? Does it require people to be able to hear or see or feel? What assumptions are you making? And I think the useful part, the later part, is when you can actually go through and say, "Okay, I completely didn't take into account a person who might do X instead of Y." And in practice, I think it's one of those useful things to always do, just in terms of... I don't know.

Martez Mott:

I guess it's hard for me to explain because it's so ingrained in my research practice that, for me, it's just like any other type of task analysis I might complete. I implicitly go through this checklist in my head when I'm starting a new project, like, "Okay, what are these assumptions?" I just do it automatically now, but I think it could be a good skill, right?

Martez Mott:

Because I think if you go through that and just start listing, okay, I'm assuming this, I'm assuming this, I'm assuming that, then you can go back and check and say, "Okay, we assumed that people would be able to do this. That may not be true. We need to change this early on, because this is too narrow, this is too stringent, or this won't provide the experience that we think it will."
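One way to act on the zoom example Martez gives above, separating the functionality (zoom) from the implement (pinch, button, key), is sketched below in Python. The event names and wiring are hypothetical; the point is only that several input methods can reach the same underlying command.

```python
# A minimal sketch of separating functionality (zoom) from implement (how you
# invoke it): the pinch gesture is only one of several routes to the same
# command. The event names and handler wiring here are hypothetical.

from typing import Callable, Dict

class ZoomController:
    def __init__(self):
        self.level = 1.0

    def zoom(self, factor: float):
        # Clamp the zoom level to a sensible range.
        self.level = max(0.25, min(8.0, self.level * factor))

class InputRouter:
    """Routes many different input events to the same commands."""
    def __init__(self):
        self.bindings: Dict[str, Callable[[], None]] = {}

    def bind(self, event: str, command: Callable[[], None]):
        self.bindings[event] = command

    def handle(self, event: str):
        if event in self.bindings:
            self.bindings[event]()

if __name__ == "__main__":
    zoomer = ZoomController()
    router = InputRouter()
    # The same zoom-in command is reachable by gesture, button, or key.
    for event in ("pinch_out", "plus_button", "keyboard_ctrl_plus"):
        router.bind(event, lambda: zoomer.zoom(1.25))
    router.bind("minus_button", lambda: zoomer.zoom(0.8))

    router.handle("plus_button")   # a user who can't pinch still zooms in
    router.handle("pinch_out")
    print(round(zoomer.level, 3))
```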

Will Butler:

I'm sitting here staring at my MIDI keyboard, like a musical keyboard, and I'm going through that exercise. And the effect is almost psychedelic in some ways, because as you start to think about all the different ways... I mean, you're like, "Oh, the keys are a certain size, so you're assuming someone's hands are of a certain size. Oh, the colors you're using to indicate things assume that one can tell them apart." The thing starts to change shape right before your eyes.

Martez Mott:

Yeah. The keyboard example is great. Yeah, like the keys. I use that one all the time. And I'll say, "Yeah, the size of the keys is based off some notion of human physiology." Right? If we were all like Shaq, if we all had Shaquille O'Neal hands, keyboards would be built much differently, because I'm pretty sure he has difficulty interacting with a lot of keyboards, right? So, we're making all these assumptions, like, okay, people can move their fingers independently, people's fingers are going to be of a certain size, and things like that.

Martez Mott:

The counterexample I use in class, when I ask, "Okay, what do we assume about a keyboard?", is I show an image of a woman who's typing with her feet. And it's like, "Okay, now do the same exercise." Which of the assumptions are being violated when this person is typing with her feet, right?

Will Butler:

It's honestly mind blowing. Because once you start taking away those assumptions, the thing just... I almost feel like I'm hallucinating. Because you can imagine, oh, well, if you assume that someone can make a chord, what if we take away that assumption that someone can make a triad on a keyboard? Right? Well then, why not have another key that's attached to a triad? I've never seen a keyboard that does that. But it could allow people with totally different ways of interacting to play entire songs.

Martez Mott:

Yeah, no, I definitely agree. And I think ability-based design runs into this interesting challenge when it comes to the interface between software and hardware. Because what I'm trying to say here is that, in software, we can probably change these things, right? With hardware, I think we'll get there. There are a lot of really smart, brilliant people working on more configurable hardware. But it's harder, for example, to say, "Oh, this person has huge hands.

Martez Mott:

My keyboard is now going to physically change its size. The buttons are going to get bigger," and all these different things. It's harder to do that on a physical device, but I think you can do it in software, right? Especially as we get more toward augmented and virtual reality, in which everything that surrounds the user, especially in virtual reality, is in some sense just software, the software being the medium makes that distinction matter less, I would say.

Martez Mott:

So yeah, I think there's an interesting future in more configurable hardware, and I think ability-based design will be able to influence the way people think about those things. And doing that same exercise with software is primarily what I've been focusing on. But yeah, I think you're right. It's the same thing. Once you stare at these things and break them down for a while, you get a better understanding of, "Okay, what did people assume when they built this? What assumptions were they making? And how do those assumptions potentially hurt people who don't fit into them?"

Will Butler:

Yeah, the hardware is not as good an analogy, because at least with software, it's infinitely more configurable.

Martez Mott:

Yeah.

Cordelia McGee-Tubb:

I think too, with hardware, there's a lot more sharing of hardware. One example: I work in a senior center computer lab, so everyone's sharing the same hardware. Once they log into their email, then it can be totally customized to whatever they want. But at the end of the day, that hardware needs to work for this huge array of people. So, if we customize it in one way, then it would need to get recut for the next person. Basically, we need magic, I think, to just change hardware on the fly per person. Maybe that's what's missing in our lives.

Will Butler:

Yeah, it's pretty cool. So okay, that was school. But how did you end up at Microsoft? And what has that been like since you got there?

Martez Mott:

Yeah. So, I had built a relationship with people at Microsoft Research. I did a couple of internships there. It's great. Microsoft Research is the research arm of Microsoft. I think our job is to essentially try to imagine the future, invent the future. And for Microsoft, I think the deal is, hey, the people at Microsoft Research are giving us cover, letting us know what's coming down the pike, what's going to be the next hottest thing, the next biggest thing in technology.

Martez Mott:

So, you want people working on that, the same way Google and Facebook and Apple, all of these big tech companies, have these research labs in-house for technology transfer and knowledge production in general. So, it's been great for me. I'd built this relationship with them, so I knew people when I started. Like I mentioned earlier, I think it's been an easy transition. Microsoft Research is still very much steeped in the academic community. To be a researcher there, you need a PhD.

Martez Mott:

We bring in interns every year, like hundreds of interns from all over the world every summer. Not these past two summers, due to coronavirus and travel restrictions. But yeah, it's been really great. I get to do the job of essentially creating a research agenda, executing on that research agenda, and talking with the people on the Microsoft Research Ability team, which is the team that I'm on. Essentially, what the members of our team and I are interested in is how we can create more accessible hardware, software, and services, how we can improve these experiences for people with disabilities.

Martez Mott:

And I think it's a great mission for the team. The team had just gotten incorporated, and I joined as the first external member of the team. And yeah, I think it's been really great so far. My primary research since starting at Microsoft Research has been investigating augmented and virtual reality systems. So, similar to the touchscreen work, it's this idea of, okay, what assumptions do we make when we expect that people can engage in a virtual reality environment or an augmented reality experience?

Martez Mott:

And as you might imagine, there are a lot of problems there. And it's been great; there are so many brilliant people that I get to work with. Because my background... I spent a lot of time working on touch systems. I know a lot about those because I spent six years hacking away at different things. Then I turned over into the VR, AR type of world, and now I'm steeped in Unity and all these other different types of programming environments.

Martez Mott:

But it's been great that I didn't have to essentially start from zero. There are a lot of great people on my team and other teams who have done virtual and augmented reality work before. So, it's great to lean on them, to partner with them, and to collaborate with them on a lot of these different projects. And like I mentioned earlier, what's been really fascinating for me is that I get to talk to people on product teams who are working on accessibility, who are thinking about accessibility or care deeply about it.

Martez Mott:

And they want to incorporate more accessibility into their team, or they want to get their skip manager really interested in something. And it's been great to have a lot of conversations with people throughout the company who are interested in accessibility, who are interested in just, in general, creating more accessible design patterns, right? Thinking about our design practices and how we can incorporate those more fully into the work that we do.

Martez Mott:

It's all been really fascinating for me, especially since I went to school straight through. I never took any time off; it was undergrad, master's, PhD. So, it's been nice to get a sense of what research looks like in an industry setting and to live in what we call the research translation gap. Which is, you do research, but for the research you do to get put into products or services, there's this gap that exists, because what the research initially attempts may not necessarily meet the vision for the product or the feature or whatever it is.

Martez Mott:

There could be a host of different issues, things like that. But it is really interesting to occupy that gap for a while, to learn what those challenges are, and perhaps one day to be able to address some of them.

Cordelia McGee-Tubb:

Yeah. I'm curious since you are now in a company that makes product, how much of your research work is tied to particular products versus more general research on what could happen in technology at large or as a whole?

Martez Mott:

Yeah. So, I think it's a mix of both. I definitely think there's a desire for the work that people do at Microsoft Research to influence Microsoft's current products and services. But there's also a desire for Microsoft to stay one step ahead of the competition. I think one of the best examples of this right now is in the artificial intelligence and machine learning space, where each company is amassing engineering talent and data and resources, wanting to build better language models, better computer vision models, better everything, essentially. And similarly, with some of the things that we do in human-computer interaction and design, we try to envision what the future might look like.

Martez Mott:

So, virtual reality, for example, isn't a dominant form of computing at the moment. It's really niche. It's mostly for gamers. But in 10 years, what if virtual reality systems are just as popular as smartphones? Right? And if that's the environment we live in, in 10 years, asking some of these questions that we're thinking about now, about accessibility and how to improve the design of these systems, is more future-leaning, right? Microsoft itself doesn't produce a VR headset; we have the AR headset, HoloLens. And we have partners, I think, that use the Windows Mixed Reality ecosystem but produce the hardware, like Lenovo and HP and others. So, I think there's definitely a benefit to saying, "Hey, part of what we do is envision the future." And that future may not happen for 10, 15 years. But when it does, the hope would be that you have the resources in-house to say, "Hey, we weren't ready for that invention five years ago, but we think we are now. Let's pick this up and run with it."

Cordelia McGee-Tubb:

And you don't want to get to that future 10 years from now, where virtual reality is our main interface, and accessibility hasn't been thought about. So yeah, it seems really important to be thinking about it now. Even though virtual reality has come so far in the past few decades, it's still very much in its early stages, I feel like. So, keeping accessibility top of mind right now is going to just make it stronger in the future. A more robust experience for everyone.

Martez Mott:

Yeah, I definitely agree. And I think people like to use the example of the iPhone. When the iPhone first came out, there was no VoiceOver, so there was no screen reader. So, iPhones were essentially just inaccessible to people who are blind or people who need a screen reader. You don't want a similar type of experience with virtual reality, where all of our, maybe not all, but let's say a majority or a large quantity of our communication and interaction and work and all these different things occur in a VR headset or an AR headset.

Martez Mott:

And if we do get into that space in 2030, or whenever the case may be, we don't want to be scrambling to come up with a solution to something that we should already be working on. To be honest, we should have been working on it 30 years ago. And I think that's one of the biggest challenges with these technologies, and one of the frustrating things for me personally: we're constantly playing catch-up. So, take the iPhone example. The iPhone gets released.

Martez Mott:

And at least in the academic community, there was this large push for people to try to understand, "Oh, how can we come up with more accessible interaction methods for touchscreens for blind people?" But the idea should be, "No, we should have been doing this work in the '70s and '80s, when other people were working on touchscreen technologies." I mean, I remember reading papers that I had to cite for my dissertation that were from the 1960s, about people doing early experiments on touchscreen technologies and early styluses and different electronic tablets and things like that.

Martez Mott:

And the idea that the accessibility part of that didn't really break into the conversation until 40 years later is disappointing. And I think that virtual reality and augmented reality are a similar type of thing. People have been doing work on these types of systems since the '70s and '80s. Over the last decade or so, more people have been able to get in and do more VR stuff just because I think the systems themselves have become more readily available.

Martez Mott:

So, it's like, "Oh, you could go buy an Oculus Rift or an HTC Vive. Whereas in like the '90s, there were a few like VR systems you could purchase, but they were probably prohibitively expensive. The things that people are working on, like research universities required, they have a really great understanding of not only the hardware, but computer graphics. But now it's just like, "Oh, I could just get Oculus download. I can buy an ACC by download, like the SteamVR, SDK, and just get working on things right."

Martez Mott:

So, the barrier to entry is a lot lower. So, you're going to see more people come into this space. But I still think, in a lot of ways, the real opportunities to do a lot of the really, really great work are things we should have been doing 30 years ago.

Will Butler:

VR and AR, mixed reality, it's such a field of ifs, so many big ifs. And I guess it would be really easy to spend a lot of time working on projects that might be 10 years off or never happen at all. I'm wondering, in your research, have you figured out what is certain, or at least what is definitely coming down the pike, or maybe has already snuck into the way we operate, in terms of AR or VR? Just for the purposes of focusing the conversation on where we prioritize our accessibility efforts right now.

Martez Mott:

Yeah. No, I think that's a great question. So, if we go with the AR example, I think there are going to be so many different types of scenarios for AR to be used to display different forms of content or information to people, content that they may interact with or may not interact with. Probably most of the time, they won't interact with it. And I think those are going to be a lot of the usage scenarios that AR will present, if you're asking me for the first certain type of scenario, right? And I think that AR could go a lot of different ways.

Martez Mott:

But I do think maybe the example that might help is, "Okay, you can imagine an AR glasses form factor that just allows you to look at information and see something that no one else sees." So, I can imagine a scenario where I can look at a physical piece of information, and it can represent digital content to you and me differently, right?

Martez Mott:

So, it's like, okay, if I'm at a bus stop, and let's say this bus stop has 10 different bus lines or bus numbers that stop at this one particular stop. If I have my AR glasses on, I could look at that information, and it'll be like, okay, it knows that Martez always takes bus 22. I look at that and it just says, "Hey, bus 22 will be here in five minutes." Will looks at it, Will takes bus 25, and it's like, "Yeah, bus 25 will be here in 15 minutes," type of thing, right?

Martez Mott:

No longer do you have to parse a large wall of text to try to figure out, "Okay, when's my bus or commuter train coming?" You can just look at something and get that information. So, I think there's going to be a lot of scenarios like that. And it could just be, "Hey, remember that time you had to take out your phone to look up some information? Now it just pops up to you in some type of way."
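
To make the bus-stop scenario concrete, here is a minimal, purely hypothetical sketch of the personalization step Martez describes. The routes, departure data, and function names are all invented for illustration; a real system would pull live transit data and render the result in the glasses rather than print it.

```python
# Hypothetical sketch: filtering a full departure board down to the rider's
# usual route, as in the "glance at the bus stop" scenario. All data and
# names here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Departure:
    route: str
    minutes_away: int

# The full "wall of text" a rider would otherwise have to parse.
DEPARTURE_BOARD = [
    Departure("22", 5),
    Departure("25", 15),
    Departure("40", 2),
    Departure("62", 30),
]

def personalized_glance(board, usual_route):
    """Return only the information this particular rider actually needs."""
    for departure in board:
        if departure.route == usual_route:
            return f"Bus {departure.route} will be here in {departure.minutes_away} minutes."
    return f"No bus {usual_route} departures found right now."

# Martez looks at the stop and sees only his route; Will sees only his.
print(personalized_glance(DEPARTURE_BOARD, "22"))
print(personalized_glance(DEPARTURE_BOARD, "25"))
```

Notably, the filtered string at the end is modality-agnostic: it could just as easily be spoken or rendered as haptics as displayed visually, which is exactly the design question raised next.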

Will Butler:

That's a fascinating example. Or if Will has a disability that no one has considered, Will looks at the bus stop and sees nothing.

Martez Mott:

Yeah, yeah. So, it could be... I think the piece that's going to be really interesting for AR is this passive information that it can provide. And yeah, to your point, it's like, "Okay, if we're making the assumption that this information is going to be provided visually, well, what do we do then?" Right? What are the alternatives? How do we provide access to people who want this information? Because this information could be beneficial in a lot of different scenarios. But then, the question becomes, "Okay.

Martez Mott:

Well, we need to challenge our assumptions. We can't assume that everyone who wants to use these systems is going to have vision, because there could be a lot of other great benefits that these technologies could provide." I can imagine they could have bone-conduction headphones that allow you to not have to wear in-ear headphones, or whatever the case may be.

Martez Mott:

So, I think there are just going to be a lot of different scenarios that people are going to have to envision accessibility for. And I think, unfortunately, we're not very far along in envisioning or imagining all of those scenarios.

Will Butler:

Is there anything we know for sure about VR? I mean, people are buying Oculuses and playing Tetris. Is there anything we know for sure about how the future of VR is going to look?

Martez Mott:

That's a great question. I think one of the things with VR that's going to be really great... So, VR can give you a sense of immersion that is hard to replicate in other types of systems. So, even compared to something like a Zoom call or a Teams call, or like the World of Warcraft example that I gave earlier, there could be all these different environments that you can reside in. But the level of immersion you get from being in a VR headset, I think, is what people call the killer app or whatever, like the killer feature, right?

Martez Mott:

That's the thing that you just won't be able to get from an AR system really, like a standalone AR system that has VR capabilities, or from any type of standard monitor, things like that. So, I think this level of presence or immersion that VR provides is going to do a lot of things. I think it's going to do a lot for telepresence or telehealth, telework, any type of thing where people need to feel embodied in another space.

Martez Mott:

I think VR can be really useful for that. And I think that as hybrid work goes on, especially for enterprise scenarios, so I'm thinking like Microsoft, big corporations, things like that, you can imagine that these conference rooms now just have more sensors and things like that. So, you can use something like the Azure Kinect camera, which has a depth sensor and all these different things, to lay out or map the topology of a room.

Martez Mott:

But then, you can imagine that, if I'm at home and I need to go to a meeting in which we're doing some type of design-based work, I could jump into a VR type of environment based off this reconstruction of the room from the 3D sensor, so then I could appear inside their environment using a projector or some different type of means.
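
As a rough illustration of that room-scanning scenario, the sketch below names the stages only conceptually. None of these functions correspond to a real Azure Kinect or Mixed Reality API; they are placeholders for depth capture, room reconstruction, and placing a remote participant.

```python
# Hypothetical pipeline sketch for the "scan the conference room, join from
# home in VR" scenario. Every function here is a stand-in, not a real SDK call.
from dataclasses import dataclass

@dataclass
class RoomModel:
    vertices: list          # rough 3D points recovered from the depth sensor
    seat_positions: list    # places a remote participant could be "projected"

def capture_depth_frames(num_frames: int) -> list:
    """Stand-in for reading frames from a depth camera such as Azure Kinect."""
    return [f"depth_frame_{i}" for i in range(num_frames)]

def reconstruct_room(frames: list) -> RoomModel:
    """Stand-in for fusing depth frames into a rough 3D map of the room."""
    return RoomModel(vertices=[(0, 0, 0), (4, 0, 0), (4, 3, 0), (0, 3, 0)],
                     seat_positions=[(1, 1, 0), (3, 1, 0)])

def place_remote_participant(room: RoomModel, name: str) -> str:
    """Choose an open seat and report where the remote person appears."""
    seat = room.seat_positions[0]
    return f"{name} joins the meeting at seat {seat} in the reconstructed room."

frames = capture_depth_frames(30)
room = reconstruct_room(frames)
print(place_remote_participant(room, "Remote attendee"))
```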

Will Butler:

The deaf person with the HoloLens just gets to decide where their captions are in the room and that sort of thing, right?

Martez Mott:

Yeah, yeah. There are all these different scenarios that just make it really complex. And I think gaming is also going to be a really huge application for VR, just because haptics are getting better in VR systems. So, people are going to be able to explore a lot more immersive qualities, not just with visual fidelity, but I think there's going to be more haptic fidelity as well that's going to improve that scenario for gaming and other types of things.

Martez Mott:

So yeah, I mean, I think it's going to be a lot. And for me, the thing that's really interesting in this space is that we built up an ecology of devices primarily suited to 2D interaction. So, in any type of desktop or mobile interaction, you're interacting on this 2D plane, even though you can have some 3D properties where you can layer windows and things like that in front of each other. But you're not necessarily moving within 3D space.

Martez Mott:

And in VR, and in a lot of cases AR as well, this 3D movement, this 3D interaction, is really core to the experience. And my fear for virtual reality in particular is that, as we increase the fidelity of the virtual world and we start to port more of the real world into the virtual world, we're going to run into this problem where we make the virtual world just as inaccessible as the real world.

Martez Mott:

And I think that's problematic for these experiences that we expect people to have in VR. I think a lot of people, similar to movie directors or musicians, people who are like, "No, this movie is meant to be seen in the theater, or this song is meant to be played at a concert," I think there are people who create VR experiences who are like, "No, this experience is meant to be like this.

Martez Mott:

I want you to actually move both your hands like you're actually swinging a sword or you're actually climbing this mountain, and things like that because that's the experience I want the user to have." And what we found from our research is that, people are like, "I just want to be able to enjoy the experience." So, if a person with limited mobility has difficulty using controllers, but still wants to play this climbing game, yeah, okay, maybe the creator really wanted this to feel like the person was climbing Mount Everest or whatever it was, right?

Martez Mott:

They had this idea for what that experience would be like when people are actually using their controllers and their body to move in space and do these different things. But if a person with limited mobility, who uses a power wheelchair and maybe has limited mobility in their upper body, can't do those types of things, then if they could play that game, or have that experience, using eye gaze, or a combination of gaze and voice, or gaze and other means, I think that should be the goal, right?

Martez Mott:

That people can have the experience, rather than saying that the goal is this one particular type of experience, which usually means "as realistic as possible."

Cordelia McGee-Tubb:

Yeah. It seems to me the whole point of virtual reality is this idea of limitless possibilities, so why recreate the same biases that exist in our real world? It reminds me of... I don't know if you all ever played with Barbies as a kid, but there was a Barbie dream house, which was marketed as a dream house. It's the perfect house. And the Barbie wheelchair doesn't fit through the doors.

Cordelia McGee-Tubb:

And it's like, "Well, if you're the dream house, you should have the biggest, most accessible doors possible. You shouldn't recreate actual structural issues in our world." And I think about that, too, in terms of virtual reality. I remember, I went to a talk a few years ago, just about how there are so many virtual reality games that assume that the person who's playing is standing.

Cordelia McGee-Tubb:

And so many people are sitting or have shorter stature. And there are games that are designed around the idea that someone is an average five-foot-six or five-foot-eight height. To go back to something you were saying earlier, there are just so many built-in ability assumptions, and also assumptions about how people want to play.

Martez Mott:

Yeah.

Cordelia McGee-Tubb:

I would also want to toss in there on top of that, one more design premise that you mentioned in your talk. And folks should really go watch your talk from the accessibility of VR meetup from... was it March?

Martez Mott:

Yeah, in March.

Cordelia McGee-Tubb:

But you talked about design for interdependence. And that was something that I'd never heard before. I'm wondering if you could tell us a little bit about the concept there.

Martez Mott:

Yeah. So, this is another great concept that Cynthia Bennett introduced into the HCI and accessibility literature, which is this idea that a lot of people make the assumption that all accessibility work is geared towards providing people with disabilities independence. So, the idea is, okay, a person can accomplish this task independently, so that should be the goal.

Martez Mott:

And what Cynthia and her colleagues identified in their work is that, while providing independence can be a great and noble goal in a lot of situations, there are also a lot of opportunities in design to take advantage of interdependence, in which people with disabilities work with friends and family members and caregivers and others to accomplish tasks.

Martez Mott:

And in some ways, if you could design to facilitate easier interdependence, you could also have really great and potentially impactful design solutions that can improve the accessibility of certain things. So, one of the things that came up in a study that we did, where we asked people about their VR experiences, is that people often mentioned working with a caregiver or a family member, and how that would influence their ability to interact with a VR setup.

Martez Mott:

So, we didn't want to say, "Design for independence is bad," because I think you want to design for both simultaneously. But I think the point is to say, "Hey, there are a couple of examples that we identified in this process where interdependence can be really beneficial." So, for example, you have to set up a boundary or a play area when you put on your VR headset. This is the boundary that lets you know, like, hey, you're safe to move or operate within this boundary, but you don't want to go outside it because you might knock over a vase or something like that, right?

Martez Mott:

But in these processes, it's like, okay, hey, take a controller and draw an area on the floor surrounding you that will essentially demarcate where the boundary is. So, people were mentioning, "Oh, this can be really difficult if I'm tethered to my headset, and I'm using a power wheelchair, and I'm trying to use one hand to maneuver my wheelchair and the other hand to point my controller at the ground to set up this boundary area."

Martez Mott:

So, things like that could be tedious, but they can also be an opportunity, where we can say, "Hey, maybe we could design this process better." If we designed it for interdependence, maybe this process could be easier for some people for whom that's a possibility. So, the idea could be, okay, well, how would this work, for example, if a person wants to interact with VR, but they experience challenges with setting up the boundary system?

Martez Mott:

How can we have a boundary system that allows another person to easily take the controller, provides feedback between the two people so that they know they're doing the right thing, and lets them engage in such a way that the task is being completed, but in a way that people are working cooperatively together to accomplish it? So, I think there's a lot of opportunity for that in design. And I think that's something that Cynthia and her co-authors really lay out nicely: why interdependence should also be something that we focus on when we do accessibility work.
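
A minimal sketch of what a boundary-setup flow designed for interdependence might look like, assuming a hypothetical system in which either the player or a helper can trace the play area and both confirm it. No real VR SDK is used; every name below is invented for this example.

```python
# Purely illustrative sketch of an interdependent boundary-setup flow:
# either the player or a helper can trace the play area, and both people
# confirm it before it is accepted. All names are hypothetical.

MIN_POINTS = 4  # arbitrary threshold for a usable boundary outline

def confirm_with_both(player_ok: bool, helper_ok: bool) -> bool:
    """Both the player and the helper approve the boundary before it is used."""
    return player_ok and helper_ok

def set_up_boundary(points, traced_by, player_ok, helper_ok):
    """Accept a traced boundary only when it is complete and both people agree."""
    if len(points) < MIN_POINTS:
        return f"Boundary traced by {traced_by} is incomplete; please keep going."
    if not confirm_with_both(player_ok, helper_ok):
        return "Waiting for both people to confirm the traced area."
    return f"Boundary accepted: {len(points)} points traced by {traced_by}."

# A helper traces the area on the floor while the player confirms, from
# inside the headset, that it looks right.
helper_points = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(set_up_boundary(helper_points, traced_by="helper",
                      player_ok=True, helper_ok=True))
```

The design point is the shared confirmation step: neither person completes the task alone, but the task still gets done.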

Will Butler:

Well, it happens all the time in productivity software, right? I mean, cloud-based document collaboration is designed for interdependence, right? We just don't have a problem with it because it's not framed as being for people with disabilities, right?

Martez Mott:

Yeah, yeah, exactly. For the most part, I care a lot about education and teaching. And I think one of the things that a lot of great organizations like Teach Access and others are doing is trying to get people to the point where they understand that accessibility, especially if they're doing anything human-facing, is so important to understand, but so few students have the opportunity to really engage with it.

Martez Mott:

And I think people liken it to security, where you might want your engineers or your developers to all have some understanding of security practices, even though they're not all going to be security experts. I think you also want to get that way with accessibility. Not that you're going to expect everyone to be a subject-matter expert on some accessibility topic, but you want people to have a core enough understanding of accessibility so that, when things come up like the ones we've been describing, people understand that those things need to be addressed and should be addressed.

Martez Mott:

And I think we just haven't gotten there yet in terms of the education, so that there's more awareness on the part of everyone who's involved in the design process. Instead of having a process where accessibility is thought of as an outside thing. You don't necessarily want-

Will Butler:

An add-on?

Martez Mott:

Yeah, exactly, an add-on. You don't want to think of it as an outside type of thing that's influencing your product or your service; you want to think of it as core to your product and service.

Cordelia McGee-Tubb:

Yeah, and I was thinking about that when you were talking about your research experiences in undergrad and grad school because I also studied computer science in undergrad, and accessibility just wasn't on the map at all. It wasn't on the curriculum. And it's something that I think a lot of people stumble into through research opportunities, which is knowing the right people who have passion areas, or knowing people in their own lives who care about accessibility and disability access. But yeah, it's not taught in schools, and it needs to be.

Cordelia McGee-Tubb:

So, I'm grateful for more and more people in academia working on it and keeping it top of mind, and providing those opportunities for young students who are like, "I know I want to use technology to do something helpful, but I don't know what," to stumble into accessibility. But we can't have it be something people just stumble into. We need it to be, not whacking people over the head with it, but like, "Hey, this is a part of making technology for humans."

Martez Mott:

Yeah, I agree.

Will Butler:

I'll go out on a limb here and say that I think if you've listened this far, you are probably hoping that your kids will also not make it onto the seventh-grade basketball team. Because this has just been, I think, one of the greatest conversations we've had. And Cordelia, I'm sure you agree with me. The work you're doing is amazing. And everybody needs a research team. That's how I feel, at least, now. Really, really, really awesome. Cordelia, any other final questions before we wrap? We're just about at time here.

Cordelia McGee-Tubb:

Oh, my goodness, we're already at time. No, this has been amazing. I guess, yeah, Martez, just looking towards the future, I know we've got to ask you: what's a certain thing with VR? Just generally, what are you most excited about exploring more in your own research in the next five, 10 years?

Martez Mott:

Oh, wow. That's a great question. Yes. So, I'm really interested... we talked about it a little bit, but this intersection of hardware and software to me is really fascinating, especially as it relates to VR and AR. So, just really quickly, these input devices that we use essentially dictate the language that we speak or use to interact with computing devices. So, if you use a mouse and a keyboard, that's going to provide a certain type of language that's different than a smartphone or a tablet, which is going to provide a different type of language, right?

Martez Mott:

The language of taps and swipes. And then with VR, there's still this ongoing conversation about, "Well, what is that language going to be? Is it going to be one-to-one, real-world-to-virtual-world interactions? If I want to hit a tennis ball, do I have to swing my arm like I'm actually swinging a tennis racket?" And I think the interesting part about that for accessibility is that it gives us this opportunity to look at it and say, you know what? People have a lot of familiarity and expertise with the input devices that they use right now.

Martez Mott:

So, if somebody uses a quad stick or a sip-and-puff device, or if they have a custom switch setup that works really well for them, I'm really interested in understanding, "Hey, how can we take more traditional 2D interfaces, things that people already use and are already very efficient and effective with, and how can we translate those input devices and create interaction techniques or interaction methods that will allow them to be used inside of a virtual world or inside of a VR environment?"

Martez Mott:

And that's a question that we still haven't really been able to answer yet. There's not a great metaphor for thinking about how to use a mouse and keyboard to control an avatar that has two arms and is doing these different things. But that's what I'm really excited to investigate over the next couple of years: to really do some great design and hardware work to understand how we can bridge the gap between the hardware that people use and how we can create alternative methods for that hardware to be leveraged in scenarios it wasn't designed for.
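
The kind of remapping layer Martez describes could, in spirit, look something like the hypothetical sketch below, where a per-user profile translates events from existing assistive devices into in-world actions. The device names, events, and actions are assumptions made for illustration, not part of any real VR platform.

```python
# Hypothetical sketch of an input-remapping layer: existing 2D assistive
# devices (a switch, a sip-and-puff controller, eye gaze) are translated into
# the avatar actions a VR title normally expects from tracked controllers.
# Device names, events, and actions are all invented for illustration.
from typing import Optional

# Per-user profile: which raw device event maps to which in-world action.
USER_PROFILE = {
    ("sip_puff", "hard_puff"): "grab_object",
    ("sip_puff", "hard_sip"): "release_object",
    ("switch_1", "press"): "swing_sword",
    ("eye_gaze", "dwell"): "select_target",
}

def translate(device: str, event: str) -> Optional[str]:
    """Map a raw input event to the avatar action the experience understands."""
    return USER_PROFILE.get((device, event))

def handle_input(device: str, event: str) -> str:
    action = translate(device, event)
    if action is None:
        return f"Unmapped input: {device}/{event}"
    # In a real system this would drive the avatar; here we just report it.
    return f"Avatar performs: {action}"

# A switch press swings the sword without the player swinging an arm;
# a gaze dwell selects a target with no controller at all.
print(handle_input("switch_1", "press"))
print(handle_input("eye_gaze", "dwell"))
```

The point of the per-user profile is that the experience stays the same while the input language changes to match what each person already uses well.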

Will Butler:

That's brilliant.

Cordelia McGee-Tubb:

Wow. Well, I hope you publish your findings. I look forward to reading them. And yeah, I just want to echo, well, this has been a great discussion. I feel like I've learned so much about World of Warcraft and touchscreens, and everything in between. So, thank you so much for joining us today, Martez.

Martez Mott:

No, thank you both so much for having me. This was so much fun. I can't believe the time flew by so fast.

Cordelia McGee-Tubb:

I know.

Martez Mott:

This is great. I had a lot of fun chatting about my research and background. And thank you both so very much for your thoughtful and insightful questions. I really appreciate it.

Will Butler:

Thanks for listening to our interview with Martez Mott. Be sure to go search the hashtag #GAAD. It's Global Accessibility Awareness Day today, and every day really. So, get out on Twitter or LinkedIn or whatever your favorite platform is, and see what people are talking about. There are some amazing announcements coming out, both on the Be My Eyes front as well as from many other folks who are all talking about what accessibility means to them. Tune in in two more weeks for another great interview. And you're always welcome to shoot us an email at 13 Letters. That's 13letters.bemyeyes.com. Thanks.