Automating the Digital Universe

22 Feb 2021
 by 
HeadSpin team

HeadSpin proudly presents Converge: a show about experience, resilience, inclusion, and acceleration in the digital space.

The digital universe, like the physical one, it seems, is expanding, and we’re seeing a proliferation of application platforms and an increased need for automation. In this episode, our panelists discuss how they’ve arrived at solutions to automation requirements, how they see automation playing more of a role in regular business processes, how to efficiently automate IoT with open source tools, and best practices for testing video streaming.

This panel discussion features Jonathan Lipps (Author of AppiumPro and Director of Learning & Education Programs at HeadSpin), Anna Vasilko (Senior Engineering Manager, Voice & Video SDKs at Twilio), Marie Smith (CIO & Co-Founder of Data 360), Janna Loeffler (Director of Test Engineering at Equinox Media), and Susheel Daswani (Director & Head of Engineering at Citi Ventures Studio). This session was recorded live from our virtual event Converge on April 20th, 2021.

Transcription

Jonathan Lipps: Hello, everybody. I’m Jonathan Lipps, and welcome to this session on “Automating the Digital Universe.” As you’ve heard, I’ll be your moderator. We’re going to have a really awesome and interesting conversation about changes and challenges in automation, and that can mean a lot of things as we’re going to find out when talking to our panelists.

Just a little introductory note. Because I’m a nerd, you might already be aware that our physical universe, you know, the one that we’re in, whether it’s a simulation or not, is constantly expanding in all directions, right? I looked up an article this morning; recent estimates put it at about 70 kilometers per second per megaparsec, which is like three point something million light years. I know that sounds pretty fast, and it’s accelerating. So I bring that up because I wonder if it’s maybe a fact about all kinds of universes. Because as we’ve all experienced for quite some time, the digital universe that we also live in keeps on expanding in an accelerating fashion as well.

So just think about the explosion of digital technologies in the last five or 10 years: the continued expanse of digital mobility, increased wireless speeds (we heard a little bit about 5G this morning, for example), the proliferation of IoT devices and services, the ascendancy of AI and machine learning and data science, the crossing of virtual reality and augmented reality into the mainstream, the consumerization of wearable technology, the at-first-slow-and-now-sudden enablement of remote work, and even technologies that are a little more straight out of sci-fi, like self-driving vehicles and increasingly scary-looking robots.

I think we don’t really need to talk about the benefits of all of these technological developments, because they’re pretty obvious. That’s why people are making all these new things, and the benefits are usually marketed pretty well on top of that. I typically find it more interesting to think and talk about the risks and challenges of these new technologies — not just because it’s not what people usually think about, but because it’s where we enter the realm of unintended consequences, right? We could say something like: no good technology leaves humanity unscathed, as we’ve learned with many of the false starts and security debacles in, for example, the Internet of Things realm.

So for us, as technologists, or people interested in technology, we’re certainly responsible for thinking through these consequences and designing technologies that are not just beneficial but also ethical, and I mean ethical in how we design them and how we use them and the access that’s provided to them across the board. But regardless of benefit or risk, and I love talking philosophically about that, there’s one key component of every new digital technology without which the technology wouldn’t be driven forward. And that’s automation.

Whether we’re talking about a kind of automation which enables a product to be developed or produced or sold, or the automation that enables a product to be tested, automation is super critical. As we gaze out into our expanding digital universe, we can ask the question: What does automation look like for all these new technologies and use cases?

In a lot of industries I’ve observed, especially software design, automation tends to lag behind research and development, which can cause inefficiencies in the production pipeline. This fact was one of the reasons that led me to start work on the Appium mobile automation software, because it was obvious at the time that mobile developers lacked the tools that were making web development so fast and so high quality, or if not always high quality, at least, you know, in the limit based on the available tools.

So my own personal focus for this topic of automation and the expanse of the digital universe continues to be Appium. Still working on it. Our goal as a project there is not limited to just automation of mobile applications, but we want to enable automation of any current or future platform and to make it easy for anyone who wants to test or use a given app or platform in an automated fashion. So I have all kinds of talks and stuff about that on the internet, so you can go look that up if you’re interested.

Now I’d like to turn the focus and spotlight to each of the amazing panelists that we have here and ask them to share a bit about themselves and what automation means to them, if I can put it in those trade terms. Of course, as a group, we’ll have some discussion about some interesting questions. If you have questions along the way, please put them in the question interface as we’ve been doing all day, and we’ll hope to have time for some interaction with those questions towards the end of the session.

So, again, let me just run through some of the beautiful people that I have here with me, and we’re gonna then go through each of them and hear them talk about some of their backgrounds. So, you know, we’ve got Anna Vasilko here from Twilio, and we have Janna Loeffler from Equinox Media. We have Marie Smith from Data 360 and Susheel Daswani from Citi Ventures Studio. So you’re going to learn a little bit more about each of these amazing people right now, beginning with Anna. So Anna, kick us off. Tell us a little bit about your background and what automation means to you.

Anna Vasilko: Yeah, really, it’s really great to be here with you all. My name is Anna Vasilko, and I manage software engineering teams at Twilio. If you’re not familiar, Twilio is a cloud communications platform. Basically, we build technologies which power digital interactions for a lot of companies out there, large and small. In fact, in 2020, we reached an important milestone: we powered more than 1 trillion digital interactions. My teams specifically work on voice and video communications. To give you an example, our voice API is commonly used by companies to, you know, transition from legacy physical phone systems to modern solutions in web and mobile applications, and our video APIs can be used to build all sorts of video experiences. For example, something similar to this video event.

I’m sure you all know that 2020 was a massive year of video adoption, and we really felt it. My teams put a lot of effort into helping healthcare and education companies move from physical appointments and classrooms to video, sometimes within just a few days. Today we continue seeing more and more new and exciting video and voice use cases popping up across different domains, and it’s fascinating to see this ongoing acceleration. As it’s happening, we also think about how to automate testing of all of these use cases. So, happy to talk about this more in this discussion.

Jonathan: Awesome. Thanks, Anna. Let’s, let’s hear from Janna.

Janna Loeffler: Hello, I’m Janna Loeffler, and I’m the Director of Test Engineering at Equinox Media. So, Equinox Media is a fitness and wellness company. We provide a mobile app and video streaming not only for mobile applications and smart TVs but also, say, our SoulCycle At-Home bike. So automation means a lot to me, because I’m really passionate about enabling teams to be efficient and effective when it comes to not only testing but quality. For automation, that means: how can we get fast feedback to developers through test automation, infrastructure automation, platform automation, all sorts of different avenues that really impact the software development lifecycle? So that’s what automation means to me.

Jonathan: Awesome. Thank you so much. Next, let’s move on to Marie from Data 360.

Marie Smith: Hi, I’m Marie. I’m the Co-founder and CIO of Data 360. I’m a bit of an old timer. I started out with a company called AOL, so there are lots of fun things about the tech business I could talk about. Today, I’ll say that automation for me has been a study that I’ve done since those days. Right now what it means for me is simplifying complex transactions and making things easier.

90 million American workers are technically illiterate, and 73% of businesses do not know how to use digital technologies properly. So, we have a huge problem in the tech business with accessibility. I don’t know what the global numbers are, but I’m almost certain that they’re worse. What Data 360 is doing is building a low-code platform to access very advanced technologies in a simplified way, and our vision is really to create a marketplace of marketplaces. This is more of a decentralized universe, a highly personal universe, that sits on top of similar infrastructure. So we’re very excited to, you know, help people, for example, with ADHD, or people with autism, or people with learning challenges, or people who have traditional blue-collar jobs, or people who have all kinds of different challenges, be able to access the power of very advanced technologies.

Testing is such an interesting minefield I could talk about all day in our universe, because it’s like making the car or the airplane for the first time. We’re stacking together a lot of different types of technologies. We’re integrated with Google Cloud. We’re integrated with Microsoft Azure, and hopefully NVIDIA’s cloud as well. There are just so many different things happening at the same time that we need to address, because we’re looking at how we reconstruct how technology works. So excited to be here. Thank you.

Jonathan: Yeah, you’re welcome. Before we move on to Susheel, I learned something very important from Marie in our little speaker chat before this, which is the location of apparently the best Starbucks on the planet. So Marie, are you comfortable sharing that information with our audience?

Marie: All Disney properties.

Jonathan: Yeah? I guess it’s true.

Marie: I’ve done the data.

Jonathan: The most magical place on earth in all the important ways. Okay, so Susheel, will you round us out and tell something about yourself, please?

Susheel Daswani: Sure. Thanks. I’m Susheel Daswani, Director of Engineering at Citi Ventures Studio. We like to call it Studio. It’s basically a startup incubator at Citigroup. We have two products in market right now. One is Worthi and one is City Builder. Both are focused on social good and trying to increase investment and get people more jobs, which is a challenge during this time. My background is I’ve been at small startups. I’ve been at bigger companies. I’ve been at mid-sized startups which became a unicorn and then fell out of favor. I’ve been at Mozilla, which is a nonprofit tech company. Throughout my career, I just harken back to my first job.

To answer your question about what automation means to me: at my first job, I was working at Bell Labs, and I’d written this amazing piece of code, and I thought it was great. Of course, in college, you test your software, right? But it’s usually not at the forefront, right? My manager said to me, he’s like, great, you wrote the code, now test the shit out of it. So I think what automation means to me is: how do we scale that testing and perform it in a way where, ultimately, what automation is trying to do is drive great user experience? That could be a front end user. It could be an API user. Right?

So what automation means to me is how do you scale that. HeadSpin’s been a great partner, because I think they really allow me to scale it. For these consumer products that I have in market now, I have a small team; how do I test them across the range of mobile devices, desktop devices, all the different screen sizes, all the different iOS versions, right? So that’s what automating the digital universe means to me, quite concretely.

Jonathan: Awesome, thank you Susheel. I think that really gets us to our first group question. You mentioned how you appreciate some of the ways that HeadSpin enables you to automate and test certain platforms or situations or capabilities that might be difficult. One of the nice things about some of these cloud platforms and services is that they can specialize in providing automation support for very particular use cases, like, for example, how would we test the quality of this video stream? It’s a very kind-of human, subjective measurement. But through machine learning processes and things, we can train classifiers on video, and we can make this a service, and so HeadSpin has this kind of stuff going on.

What I’m curious to hear from all of you is: take something from your recent work history where you had a requirement for automation, whether this was test automation or some other kind of automation, and there wasn’t an obvious answer out there as to how to test it. Like, if we were talking about building a website, you might reach for Selenium or Playwright or Puppeteer or one of these tools. Talk to me about something where there wasn’t just an obvious answer to reach for. I know I was having a conversation with Anna about this, who had some interesting stuff here, so maybe we can start with you, Anna, and then whoever else has something to share as well.

Anna: Sure, yeah. So in my teams, we test our product across different browsers and mobile devices, and I’d say that over the years, we went through quite an evolution in terms of where we run those tests and how. Just several years back, we actually had our own physical device lab in the office, because there were no other providers allowing us to do it in the cloud, right? Then some solutions like Google Firebase Test Lab started popping up. We were one of the early adopters of it, and it felt fantastic. All of a sudden, we could run all of our tests on their wide range of devices.

But then pretty quickly, that was also not enough, because we wanted to be able to run those tests on devices but also in different geographical regions, on different local cellular networks. You search for providers that can do that, and HeadSpin is actually one that allows for that. With all of that, still today, there are a lot of challenges. For example, what we can see is that most of the test infrastructure providers are behind in updating to the latest browser versions or OS versions.

We want to be ahead. We want to know about issues before they get to our users. So, we still end up internally having all sorts of test bots and browser bots, which detect new versions as they appear in dev and beta channels and run our tests, smoke tests, on those. I’m still hoping the test providers will allow for that in the near future. There are a lot of challenges related to automating real-life experiences related to network changes, headset changes. For example, a pretty common use case: you are on a video call on your mobile phone, right? You step out of your house for a walk, and at some point you get out of your Wi-Fi range, and the network handoff happens to your mobile cellular network. This impacts user experience.

We want to test for that automatically, but it’s extremely challenging. In fact, we did have pretty crazy ideas on how to automate it, with the use of robots driving around the office to get into some corners where Wi-Fi is unreachable, or having a drone sit on our rooftop, programmed to fly out of the office and come back. But in all seriousness, we’re looking forward to test providers developing such capabilities.

There is one whole different dimension, which is Internet of Things environments. We know our video technologies are being used on IoT devices, from various robots to intelligent elevators to gym equipment to doorbells. We do quite little in terms of testing there today, and I can easily see us in the near future looking for a test infrastructure provider which allows us to cover at least the most common IoT environments.

Jonathan: Yeah, thank you, that makes a lot of sense, and it makes me realize I need to write an Appium drone driver so that it’s easy to solve that use case.

Janna: Yeah, I was just going to speak about difficult testing challenges, because I swear, I’ve run the gamut of difficult testing challenges throughout my career. I’ve worked with Disney on testing some of their robotics, and some of those, you know, are closed-loop systems. You can’t access anything outside your own little air-gapped network for ride control systems, and you’re testing systems that could kill somebody if you’re not careful. You’re in a super secret mockup phase. You’re under NDAs. So you’re trying to vet tool partners and vendor partners without being able to tell them anything about what you’re doing, which is always a fun challenge.

It really comes back to IoT. One of the biggest platforms for testing coming in the future is going to be IoT devices. Our world is getting into experiences, and experiences are where everybody is looking. Look at Equinox Media and what we do to differentiate ourselves on that experience. The Walt Disney Company: they’re known for their experiences. I worked with Carnival Corporation and the IoT devices that we have on the ships. Another testing challenge: try to test a Unity game that is on a ship, a big steel ship, in the middle of an ocean. So once again, you’re dealing with satellite coverage. You’re dealing with your Bluetooth devices not being able to reach the sensor, because you’re in a big steel boat, and you’re surrounded by water, which also causes a whole lot of network issues.

When it comes to testing, one thing that I tell all of my new testers is that you have to be creative in your testing. You have to look at different ways of being able to test things, which may not be the standard testing technique you were taught before, and really push back on a lot of vendors and help drive that innovation. I love Appium as a community, and pushing back and saying: we need this, these are the different scenarios. I think the more that we talk about all of these difficult testing challenges, the more we can work together to come up with different solutions.

Jonathan: Absolutely. Thank you for your thoughts. Marie or Susheel, any interesting automation challenges you’ve encountered?

Susheel: I think Anna covered what I originally wanted to say, so I’m actually not going to talk about that one. Looking forward to the future: I had an incident here at home recently, where I got this air quality monitor, right? It’s connected to the internet; it’s essentially an IoT device, right? We got it after the wildfires in California, and my wife was concerned about air quality in the home. I stopped, like, I didn’t enable notifications, but then I checked it out like a month ago. All of a sudden, this one reading called TVOC, which is total volatile organic compounds, was just going crazy, right?

So my wife was like: I want to move. Where are we living? What’s going on? Right? I had a real question in my mind, and I said: is this thing broken? Or is there actually something happening here, right? With the purveyor of the IoT device, there was no way for me to test, to calibrate, whether this thing was actually correct. Right? You know what I mean? So what I did was buy a second air quality monitor, right? Then I noticed that it’s actually wrong. The only thing customer service could offer me was saying, oh, we think it’s right, you know what I mean? Or, these different quality monitors have different ways to read it. I know, that’s quite unsatisfying, and I think going forward in the future, we’re going to have to present a way for users to test the status of their devices themselves, right? So that’s something to definitely think about in the future as you’re building these experiences.

Jonathan: Yeah, that’s a really good point. Everything that you all said so far makes me realize that, especially with IoT devices, there are really two different layers. There’s the digital layer, and then there’s the physical hardware layer, and testing them is very different. We’re comfortable with the digital part, because we can test that to any degree we want using simulations, but simulations will only get you so far on the hardware part. At some point you actually have to send a roller coaster down the tracks to see if it will fall off, or you actually have to put some smoke in front of the detector to see if it works. Then it raises the point you did: as consumers, how do we know that these, especially the hardware parts, are still working? How do we know there isn’t, like, a wire that wasn’t soldered in correctly inside somewhere? There are just so many things that could go wrong, but it seems like there is very little in the way of remediation for that. So that’s a challenge.

Janna: Or you just do what I do and take the device apart and say, Does this look correct or not?

Jonathan: Yeah, I would not learn too much from that, but somebody with knowledge of electronics probably would learn a lot more.

Okay, so let’s move on to another question. This is about going beyond test automation. Thinking about all the new sorts of technologies that are coming out or technologies that we’re working with, how do we see automation playing more of a role in our regular business processes? So, Marie, I’d love to hear from you to start on this question, because you were talking about how essentially, in my words, what your company is trying to do is automate away some of the manual or cognitive complexity of using software that’s valuable to use but can be inaccessible for a variety of reasons. So I’m curious to hear your perspective on that.

Marie: Yeah, it’s interesting. One of the things we’re working on right now is what I call the bad student factor: sort of the fact that we all have this collective ADD. We don’t pay attention, or we don’t read super well. Everything has become visual. And, you know, what is the balance between getting information out of a person in a visual way that moves a process forward, versus talking to them, versus having them explain it in their own words?

We’re doing a lot of testing around — because a lot of the internet is text and a lot of business processes are text — how we best get the most accurate information out of a person in such a way that an AI or a machine learning tool can understand it quickly, without a lot of human processing, or cleaning, or data management in the middle. It’s been very interesting. We have literally a 97% fail rate when it comes to filling out forms. Just the basics: fill out these three pieces of information besides your name. Can you please get these three things? Then if we ask them to seek out information, like get an ID number, or click on these three things and then tell us the result? It’s an epic fail. It’s really interesting.

So we’re going okay. First of all, this is like the number one barrier to tech. Nobody’s reading the instructions. Nobody’s looking at the screen right. Everybody’s just kind of winging it. How do we get them to understand the first step, then the second step, then the third step? It was interesting. We’re kind of the polar opposite of Amazon, right? So Amazon says, Jeff Bezos says: I’m comfortable with being misunderstood for long periods of time. That was kind of like his motto of how he built Amazon. So they would just put stuff out there and hope that people understood it. Whenever they did, they did and if they didn’t, you know, they would roll with it.

So we’re kind of the opposite. We’re like, well, we would like to start with people actually understanding. So it’s been very interesting to test out the cognitive behavioral aspects and the behavioral economic aspects of business process, and what could make it better. So far, it’s been interesting; we found that success looks more like a half-human, half-machine approach. It’s almost like there are students in the classroom doing their homework and there’s a teacher over your shoulder saying: you missed that. Can you go back and do that? Can you do this? Or with kind of the Facebook approach, with a lot of notifications and reminders and things like that. But it’s been interesting to see how people interact with just the basic pieces of information, even if the AI is doing the heavy lifting.

Jonathan: Yeah, very interesting. How about the rest of you? What sorts of automation are in place to facilitate your business processes outside of testing?

Janna: We’ve actually done a lot of automation with Slack bots, because a lot of us are remote right now. How do we talk, and how do we interact? We’re getting sick of Zoom calls. So, we do a lot of interactions and a lot of things through Slack: everything from kicking off deployments to asking IT for help to creating test data. We’ve started to really automate a lot through Slack to make it more convenient and easy for people to use.
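
As a concrete illustration of the kind of Slack automation Janna describes, here is a minimal sketch of a slash command that kicks off a deployment. It assumes Bolt for Python; the /deploy command name and the DEPLOY_HOOK_URL webhook are illustrative placeholders, not any particular team’s setup.

```python
import os
import requests
from slack_bolt import App

# Bot token and signing secret come from your Slack app configuration.
app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

@app.command("/deploy")
def trigger_deploy(ack, command, respond):
    # Acknowledge within 3 seconds so Slack doesn't show a timeout error.
    ack()
    service = command.get("text", "").strip() or "web"
    # DEPLOY_HOOK_URL is a placeholder for whatever CI/CD webhook you use.
    resp = requests.post(os.environ["DEPLOY_HOOK_URL"], json={"service": service})
    respond(f"Deployment for `{service}` triggered (CI responded {resp.status_code}).")

if __name__ == "__main__":
    app.start(port=3000)
```

The same pattern extends to the other chores mentioned above, such as IT requests or test-data creation, by adding more command or event listeners to the same app.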

Jonathan: Slack. That’s right on.

Marie: Yeah, internally we’re in the opposite space. A lot of our world is automated, right? If you look at — I hate to confess it — all of our social media, we get about 4 million users a month, we reach about that many people, and it’s all robot. Like 95% is robot, and it looks really great. Sometimes I check out the feed, and I don’t even know what’s on there because the robot did it. Sometimes the robot helps us discover new strategies.

It’s funny. I am also one of those people who leaves all of my cookies on, and I leave everything on just to see what actually happens to my world. The interesting thing is it’s become very accurate. Even internally, with teams, it’s super annoying; like, I’ll say, I really love this product, and then the product’s on television. One time — I am a native English speaker, but I’m Afro-Latina; I’m actually Asian, Black, and Latina — I said I’m a Black Latina in a conversation, and, literally, three days later, half my prescription ads were in Spanish. I was like, ugh. Then I started getting other things, like J Balvin and reggaeton recommendations, and things like that. So it’s been very interesting to see. There are some things I don’t let the robots know, and those things never come to me. The things that I let the robots know totally come to me. I kind of let my professional life stay on the automation train, just to see how it works.

Jonathan: I think that’s wise, because it helps us see how we’re the product, or the receiving end, of various kinds of automation. Some of which are helpful to us, some of which are just funny, or, like your example, some of which might actually not be helpful to us. So yeah, that is really interesting.

We are nearing the end of our time, and I do want to make sure there’s some room for questions. So I think it’d be great to move to questions, and then if we have time left over, I do have a couple more topics that we could throw to you as well. How are we doing with questions? Should I read through what I have in my list, or is someone else going to? Let’s see here.

Okay, here’s what I’ve got in the question-and-answer box. Some fairly technical questions. Here’s one that some of you might know something about, which is from Omar: how can we automate IoT devices or applications with open source tools? Is this even a thing? Anybody know anything about that?

Janna: We did a lot of our automation testing this way. It wasn’t all open source, but we bought Raspberry Pis. There are so many repos out there that have Bluetooth emulations for Raspberry Pis, or NFC emulators for Raspberry Pis. So we set up different labs, and we were able to do a lot of emulation through there. You know, once again, you get to that hardware-versus-software issue, and you’re always gonna have that. It’s the same thing when you test using mocks and stubs, you know, until you get connected to the real thing, but you can flush out a lot of the issues and different things by using as many emulators, mocks, and stubs as possible, and testing at the API level as much as possible.

Jonathan: Yeah, I mean, the only work that I’ve done on this was similar; it involved Raspberry Pis. But the reflection you just shared with us is that there are so many layers of the stack, and you can choose to cut off reality at any point. A similar analogy would be a unit test as compared to a functional or end-to-end test, right? With the unit test, you’re saying: I don’t care about that reality, I’m just going to care about this little unit. So there’s actually an open source Appium driver for the Raspberry Pi that lets you trigger essentially the GPIO header, so you can turn pins on or off.

That might be enough. Like, if all you care about is the electrical component of your physical device, then you can test it using Appium. That won’t test whatever is supposed to happen as a result of those electrical signals; if something physical happens as a result, it’s not going to help you with that. But that might be enough to say: okay, my software is resulting in the correct electrical signals, and whatever other hardware components are plugged into my device, I’m assuming they’re well tested by the manufacturer or whatever. But ultimately, yeah, maybe there are some open-source robotic testing frameworks that I’m not aware of.
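
For the curious, here is a rough sketch of what driving GPIO pins through an Appium session could look like, using the Appium Python client. The automation name and the "gpio: setPin" script command are illustrative placeholders rather than the real Raspberry Pi driver’s documented API, so treat this as the shape of the idea, not a copy-paste recipe.

```python
# Minimal sketch: toggling a GPIO pin through an Appium session.
# The automationName and the "gpio:" script names below are illustrative
# placeholders -- check the actual Raspberry Pi driver's docs for the real ones.
from appium import webdriver
from appium.options.common import AppiumOptions

options = AppiumOptions()
options.set_capability("platformName", "Linux")
options.set_capability("appium:automationName", "RaspiGPIO")  # placeholder name

driver = webdriver.Remote("http://localhost:4723", options=options)
try:
    # Drive pin 17 high, e.g. to power a relay or LED under test.
    driver.execute_script("gpio: setPin", {"pin": 17, "value": 1})
    # ... assert on whatever your software should do in response ...
    driver.execute_script("gpio: setPin", {"pin": 17, "value": 0})
finally:
    driver.quit()
```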

Janna: So, with our SoulCycle bike, we have a Bluetooth power meter on the pedals, and that’s how we get a lot of our stats for how you bike. My Raspberry Pi was busy doing some home automation stuff, and I’m like, I don’t want to get on my bike and pedal all the time for classes. So I don’t have it set up now, but I actually built a little robot that pushed the pedal for me, and I just programmed it to push it every five minutes so the power meter wouldn’t go to sleep. But once again, you’ve got to be creative.

Jonathan: Yep. That’s amazing. Let’s see, another interesting question would be, yeah, one that’s very relevant to what we’re experiencing right now. Someone’s wondering if there are any best practices for testing video streaming. So, I don’t know, Anna, if your team is involved in that with Twilio video services at all?

Anna: I’d say that, as of today, Twilio doesn’t provide a live streaming service per se, so I’m not an expert in that. But when testing our video APIs and services, we make sure that we test different sizes of rooms, in different environments, in all supported browsers, mobile phones, and operating systems, and we do validation based on specific media metrics. Latency is a big one in any video experience, and basically we can measure it by injecting some frames which have some encoded information, decoding it on the other end, and figuring out latency and other characteristics based on that. Or you can make some use of machine learning, and maybe object recognition, for validating whether a received stream is of sufficient quality.
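
To make the frame-injection idea concrete, here is a toy sketch in Python that stamps a send timestamp into a corner of an outgoing frame and reads it back on the receiving side. This is only an illustration of the principle, not Twilio’s implementation; a real pipeline would need markers that survive lossy video encoding and clocks synchronized between sender and receiver.

```python
# Toy illustration of measuring latency by encoding a timestamp into a frame.
import time
import numpy as np

def stamp_frame(frame: np.ndarray) -> np.ndarray:
    ts_ms = int(time.time() * 1000)
    # Pack the millisecond timestamp into 8 bytes and write them into the
    # first 8 pixels of the top row (single-channel frame for simplicity).
    stamped = frame.copy()
    stamped[0, :8] = np.frombuffer(ts_ms.to_bytes(8, "big"), dtype=np.uint8)
    return stamped

def read_latency_ms(received: np.ndarray) -> int:
    sent_ms = int.from_bytes(received[0, :8].tobytes(), "big")
    return int(time.time() * 1000) - sent_ms

# Simulate a grayscale frame making a round trip with ~50 ms of delay.
frame = np.zeros((480, 640), dtype=np.uint8)
sent = stamp_frame(frame)
time.sleep(0.05)
print(f"estimated latency: {read_latency_ms(sent)} ms")
```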

Susheel: Yeah, just want to pipe in on this too. You know, I think automation is not always digital. It’s not always computers. When it comes to video streaming or web browsing, which is something that I worked on at Mozilla, the part that we automated for some testing cases was being on BART, being on a train. I said, look, it’s great that this is working in the lab and stuff like that, but I want someone to get on BART and take a trip down and do web browsing or video streaming on that 60-minute trip, because the automation is obviously the conductor, right, and then the train tracks, right? So it’s not always automation in the digital realm.

Jonathan: Right, that’s a good point. To touch on this from a HeadSpin angle, I do know that at HeadSpin we have a video quality service. It measures something called mean opinion score, which I think is what MOS stands for, and it is essentially a machine learning model that’s been trained on videos of different quality, and it tries to detect whether there are too many artifacts or whatever in the stream for it to be considered high quality. Then we also have this — I mean, Rajeev will have to comment on whether this is an actual business or whether it’s a gimmick — but we actually have a backpack full of phones that people can wear and walk around cities in the world, and you can run tests on them while they’re walking around. So that would be kind of mixing everything that we’re talking about. I’d be curious to see what kind of data would come back from those types of test runs, but it would be interesting for sure. So I’ve noticed that we’re at our endpoint for this particular session. So I just wanted to say thank you to our panelists.

Curious about how HeadSpin can help accelerate development and improve your digital experiences? Contact us.