Dirk Knemeyer

AI, Big Data & Brand Loyalty, January 13, 2017

The issue is that right now it’s dumb data. Netflix, for example, can very intelligently push to us what we’re going to like to watch, and figure out what to make based on what we’ll enjoy. But their ownership of that data isn’t that valuable. It’s convenient that I’ve rated a bajillion things on Netflix, and that is all there, but if I left Netflix tomorrow for some new service, I’m not losing that much. The data doesn’t do much for me as the consumer. It helps them maintain their business model of getting $12.99 a month out of me, fine, but beyond that, it doesn’t have a more overarching value. I’ll say the same thing for Amazon. Amazon purchases? Those are even dumber. Maybe that’s naïve of me, and I’m sure Amazon is using those things to figure out how to push things toward me that I would be more likely to buy, but it still is dumb data.

There’s going to come a time, and it’s decades away, not years, when a machine can interpret that data and draw conclusions about me as an individual. Conclusions that would lead me to date better people, to pursue a better career, to spend my time in ways that are better for me. Plans to work around my weaknesses, or to proactively work with my genomic data to have me doing things, buying things, or behaving in ways that make it less likely I die at an early age. That’s when it’s smart, and that’s when it’s interesting.

Right now it’s being leveraged for capital gain, which is fine and good, but it’s just not that interesting. If I left Amazon or left Netflix, it really matters very, very little. There are other places to buy products, and certainly there are other places to watch shows. If anything, that market is overly saturated with Hulu and Amazon and others. The point I want to make is that yes, in these ways that allow a company to be successful in capitalism, it’s great. But in terms of doing really meaningful things, things that matter to me where I’d say, “Oh my God, I’m going to keep my Netflix for the rest of my life, because leaving it would just be too catastrophic,” it’s nowhere near that. To me, that’s when this data will become really interesting: decades down the road, when they understand the human animal well enough to use machine power to translate the choices we make into really, really changing our lives.

Genomics for the Few, AI for the Many, December 22, 2016

The big difference between AI and genomics is that AI is going to be changing the lives of all of us really soon. Really soon. Whereas genomics is going to change the lives of the wealthy really soon, and may or may not change the lives of the rest of us any time soon. When I think of genomics in juxtaposition to AI, which we talked about before, that’s what I think about: AI really matters to all of us, while genomics matters to the elite. Hopefully it will matter to more of us over time, but I’m not counting on it anytime soon.

Whether it be my dictating the kind of child that would sprout from the proverbial loins of myself and a partner, or my life extension, or my life enhancement at some large, important scale, those are things that will be reserved for the elite for probably many years after first becoming truly commercializable. It’s a question of whether they can ever get to the point, or should get to the point, of mattering for all of us.

Design Challenges for AI & Sensor Technologies, November 4, 2016

I think the imagination runs wilder than the reality. One of the things with the health room that we talk about is the possibility of, for example, collecting specimens in the drain of the shower, and then evaluating those. Well, if there are sensors in the drain of the shower, how do those sensors get cleaned? It’s very exciting to imagine sensors, sensors, sensors in all of these different places, but how are they maintained? How do they continue to function? Once you have this distributed network of devices all over the place, as different devices in that ecosystem fail, how does that impact the effectiveness of the ecosystem?

It’s neat, and it’s especially neat in theory, when you talk about, “Oh, it’s so cool, all these different things that can happen.” But in reality we live in a world governed by the rules of physics, and there are requirements, whether in terms of power or in terms of cleanliness, for an electronic device to function in the intended way despite being in odd circumstances, as well as limits to people’s tolerance of and interest in everything that can happen. Nanotechnology in general is interesting. Smaller means accessibility; smaller means there are more things you can do. But the potential of nanosensors in the short term, I don’t know. I think it might start to get more interesting in the medium and long term, when some of the related enabling technologies, such as batteries, are improved.

Artificial intelligence is the plumbing of our digital future; that’s just the reality. Now we’re watching and adapting as the quality of artificial intelligence increases, so that it is increasingly able to permeate and to influence. Again, it’s going to be slower than we think in a lot of ways, but it is what our digital future will be built around.

It’s just so far away, and again, I’ll use Siri and Alexa as two examples. These are products that have a lot of money from big corporations behind them, and are designed for consumer use. I find them both to be garbage, and this is years after they’ve been released and had the chance to be optimized. How far away are those products from being wonderful? It’s years. It’s not decades, but it’s years. We’re just not there yet. It’s clumsy, it’s clunky; it’s not there.

There are individuals for whom the novelty and the fun of exploring those technologies and growing with them is part of the appeal. I want to live my life. For me, the technology allows me to live my life better, and as soon as it’s clumsy and clunky and stupid, it’s making me live my life worse. Those are just two different ways of looking at it, but from a money-making standpoint, companies had better treat me as their consumer as opposed to you. Because when I see it as good enough for my life, it’s at a point where it could go mainstream, whereas you are definitely out on that bleeding edge with the tech geeks.


Driverless Cars, Ethics & The Flawed Human Animal, September 30, 2016

Tesla, with their Autopilot feature, are explicit with drivers: “Keep your hands on the wheel, keep your foot on the brakes, and stay alert the whole time.” That’s one of the reasons why, when we talked about the technology before, I was very critical of it. I was like, “Why bother? Just drive your fricking car,” at that point. This individual was given those warnings, and despite those warnings, presumably through inattention, or not having his foot or hands in place, he didn’t override the system and keep himself safe, keep himself alive. It’s a sort of over-trust in the technology: “Oh, yeah, those warnings, they’re just being overly careful. It’s like 10 and 2; we don’t really do 10 and 2, that’s overly careful.” The guy’s dead because of it, and that’s really unfortunate.

I’d be interested to know what’s happening on the litigation side. If a traffic light malfunctions and someone dies in a car crash due to the malfunction, can you sue the city, or the engineer, or the manufacturer? I don’t know how all that is set up. I ask those questions because I think the highways are a great place to talk about all this, because we have this illusion of control. Right now we’re driving our car, we crash, and it’s someone else’s fault or it’s our fault. However that’s figured out, humans are blamed. We’re heading towards a future of driverless cars. In that scenario, it’s very likely that cars will be far safer and fewer people will die on the highways. A lot of people die on highways; I don’t know the exact number, but it’s certainly tens of thousands a year in the United States.

I don’t know the scale, but a lot of people die on the highways right now. If those technologies cut that number in half, that’s objectively safer, objectively better. But the people who die in those accidents are now dying because something went wrong with somebody’s technology: my car’s software, or your car’s software, or something else, rather than my agency as a driver, or yours, where we take responsibility for what’s going on. Now it’s something totally different. I’m really curious how that’s going to pan out on the litigation side. I think there are going to be a lot of people who hate the technology because they were the unlucky lottery winners whose loved ones were killed. Fewer people die overall, but my person’s dead, and if they had been allowed to hold their steering wheel, they wouldn’t be dead.

I think those are knotty legal and ethical questions that are going to be the great early test cases, the first into the fight, as we think through all the implications of turning various parts of the world over to artificial intelligence.

We’re going to be in danger, because humans are careless; we are. One time, probably a year or two ago now, so I imagine it’s worse today, I drove down the road and asked, “How many people are texting or on their device?” I passed 12 people, and every one of them, 12 out of 12, was on a device. The overall rate is certainly under 100%, but that sample is a perfect example of it. I say we are careless because, look, I’m on my device sometimes too, unfortunately, on the road. It’s been communicated to us, we know: “Hey dumb-dumb, you are much more likely to die whizzing at a high rate of speed in this big, heavy metal thing if you’re doing that.” But we still do it. We make this little calculation based on incomplete understanding, without really grokking what the danger is. We thrust ourselves into further danger, and for what?

For nothing. For the difficulty of being bored, for the draw of the little serotonin rush, of the little thing. That way of behaving is going to continue to haunt us moving forward. It’s why we use easy passwords, it’s why we’re not secure with our online information, even though for many of us, unbeknownst to us, all of our goodies are probably out there already, and people could be using them against us, leveraging them even today if they really wanted to. We’re careless, and that carelessness is going to add risk to the future of AI.

Home Automation & Consumer AI, July 18, 2016

You know, the technology will get more sophisticated and better, but I want it to be really powerful. I don’t want it to have these 5 nice things it does that I’ve memorized, which are the only 5 things I can rely on it for, and which amount to something as pedestrian as, “Turn the lights down, please, Alexa.”

It’s this very flat, very limited amount of information, and additionally, it doesn’t have context for how I live. I have no idea what the temperature is that leads me to layer and then unlayer as the day goes on, or where the temperature break is where I go from jeans to shorts. I don’t know that in terms of numbers; I know it when I see it. And it would be very easy to program the AI around this stuff, to learn from us how we respond, with simple questions. “Dirk, was it hot or warm or cold for you today?” “It was warm, Alexa.” Now Alexa knows what I think of as warm. “Dirk, what sort of clothes would you be comfortable wearing today?” “Today felt like a jeans day to me, Alexa.”

Alexa can simply say, “Dirk, it’s going to be a beautiful day out today; however, I recommend you wear jeans, wear 2 to 3 layers, and maybe bring an umbrella, just in case.” That is where the value is. If it did that, I would say it’s powerful. If it just spits out “Sunny and 70,” I say bullshit.
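The learning loop described here, asking the user a simple question and attaching the answer to that day’s temperature, can be mocked up in a few lines. Everything below is a hypothetical illustration: the `ComfortModel` class and its data are invented for this sketch and are not any real Alexa API.

```python
# Toy sketch of learning a user's subjective temperature labels
# from simple question-and-answer feedback.
from collections import defaultdict

class ComfortModel:
    def __init__(self):
        # Map each label ("cold", "warm", "hot") to the temperatures
        # at which the user reported it.
        self.observations = defaultdict(list)

    def record_feedback(self, temperature_f, label):
        """Store the user's answer to 'Was it hot, warm, or cold today?'"""
        self.observations[label].append(temperature_f)

    def predict_label(self, temperature_f):
        """Guess the user's label for a forecast temperature by picking
        the label whose average observed temperature is closest."""
        if not self.observations:
            return None
        return min(
            self.observations,
            key=lambda label: abs(
                sum(self.observations[label]) / len(self.observations[label])
                - temperature_f
            ),
        )

model = ComfortModel()
model.record_feedback(58, "cold")
model.record_feedback(65, "warm")
model.record_feedback(82, "hot")
print(model.predict_label(65))  # closest average is "warm"
```

After a handful of answers, the model can translate a numeric forecast back into the user’s own vocabulary, which is exactly the personal context a bare “Sunny and 70” lacks.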

Probably the best home automation out there comes from companies nobody’s ever heard of. They don’t happen to have a voice interface the way Alexa does, which is very sizzling and sexy, but they do very powerful things around room-by-room control of the house: not just temperature, but music, window coverings, light status, what movies are being shown, all around the home. That technology has been out for decades now, and has done very well for workmanlike companies we’ve never heard of. The big consumer companies we have heard of are, I believe, going to completely outflank them, probably buy them, and take up their infrastructure, and all of that.

AI & Creativity, July 7, 2016

I think the idea that creativity is the special human thing is just a myth. Creativity is just the manifestation of something unexpected, something less straightforward. It’s accomplished by some combination of need that forces spontaneous innovation, and of people’s processors, people’s minds as problem-solving machines, operating differently or unusually better. The creativity comes from people who are operating differently, whether due to the wiring and the piping that we have, and/or due to the way they’re solving problems, or the context they’re in, or a lot of different factors.

None of that, to me, is special or unique, and it’s just a matter of time before the most creative pursuits humans are able to express will be matched and exceeded by machines. There’s a lot I don’t know on the engineering side, so I hesitate to put specific dates on it. But I know for a fact there are some things, creatively, that I’m really good at, and I know exactly how I could communicate to an engineer what the process is, what’s going on, that gets me from zero to a really cool, funky, unexpected solution.

If I can do that, other people can do that. Once we’re able to translate that into commands for the machine, it’s game, set, match. I don’t mean that from a scary, “Oh my god, we are irrelevant” perspective, even though that’s one possible long-term outcome. Just from the perspective that this creativity isn’t special; it’s not protected, it’s not safe. It might be further out before the machines are able to get there, but it is going to come inevitably, and so rather than fear it or rail against it, we just have to think about what’s next for us.

AI & Original Works of Art, April 14, 2016

If you think about how art schools work, they focus first on foundations, skills, and fundamentals, and you begin to learn a lot of different ways to create. The AI is learning different skills and fundamentals and different tools and ways to create. The interesting question is how far away we are from the AI creating things that are uniquely its own, based on the synthesis of the many different styles and skills it has developed beforehand. It’s probably decades, meaning on the order of ten to twenty years as opposed to three to four, but it’s coming. Not too far away. It’s definitely coming, and how will you feel then?

Art is … I don’t think it’s so much an expression of the world around us as it is an expression of the context we have been exposed to, and how we process and react to it. It happens that, as humans and in the way we live our lives, the context we’re exposed to encompasses a great deal of data that seeps in from many aspects of the world, at least within some limited geographical and cultural context.

The amount of context that machines will have in the future is going to grow exponentially. Right now the context of expression of the world that a computer would turn into art is a mash up of Picasso and Michelangelo and Rembrandt, or whomever, without the greater understanding of the world and place and the many things that inform human art. It’s inevitable that that context will be a part of artificial intelligence in the future.

At the point it has all of that context to go with the skills, it’s going to be doing what we do. I believe in hard determinism, so I believe the things we do are an inevitable, and, if we had machines powerful enough, even predictable, synthesis of basically all of our nature and nurture up to that point, as we interact with the world itself. The idea of free will and the human spirit, I think those are clumsy ways of explaining things that we don’t have enough computational power to understand.

I think AI and machines are being built in the model and mode of humans, but with computational power that will continue to be orders of magnitude higher than our own. That they’re taking our place as operational actors in many different ways is, from a technological perspective, an inevitability. The question is whether things will be done artificially, from a social and legal perspective, to prevent that operational eclipse of humans guiding the world. To me that’s the big question and unknown.

As far as computers being able to create original art in the dramatic, idealistic, lovely way that we think of human artists creating, I think it’s asked and answered: it’s coming. It’s just a question of how long it will take the nerds to program the machines properly to be able to do it. It will probably be faster than we think.

Localized AI Tools, March 31, 2016

One of the easy things he was programming it to do was this: if you put in a MinuteClinic and a zip code, it tells you the wait time at the local MinuteClinic. Within a zip code there are multiple stores, so there are some issues there, but what would really be powerful is if it were truly localized to the individual. What I mean by that is, here at the studio we eat out quite a bit at the restaurants we have access to here in Arlington Center.

What would be easy, what would be great, is if it knew that when we say “Thai restaurant,” we mean one specific place. If we say “the burritos,” it’s one specific place. Suddenly, with that local context, a bot would be able to make round-trip orders for us in seconds. We type in what we want and then it’s all just done. But with bots being designed as these global things, it can’t discern that when I say “Thai,” I mean the restaurant to our left, not the one across the street and to our right. If I say “pizza,” it means the place that’s a lot farther away and really good, as opposed to the place that’s closer and sucks.

There’s this gap with programming being done at the global level: we’re trying to write a bot that covers everyone everywhere in a generic way, and the usefulness doesn’t get that deep. Where it gets deep is where it’s more localized, where it’s more specific to us, in the examples I’m using, because of the very micro geography we’re in and our taste preferences, as opposed to the more generic glom. Bridging that gap is where a lot of the really exciting things will start to happen in the sort of software you’re talking about: getting down to the local, personal level and converting that into care, as opposed to the generic and the macro.
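At its core, the localized bot being described is a per-user alias table that resolves a casual word to one specific place before any order is placed. A minimal sketch, with invented names and no real ordering API:

```python
# Per-user alias table: resolve a generic word ("thai", "pizza") to
# the one specific local place this user means. All names here are
# illustrative, not real businesses or a real bot framework.
user_aliases = {
    "thai": "Thai restaurant to the left of the studio, Arlington Center",
    "burritos": "Burrito place in Arlington Center",
    "pizza": "The farther-away pizza place (the good one)",
}

def resolve_place(query, aliases):
    """Return this user's specific place for a casual name, or None
    if the bot would have to fall back to a generic global search."""
    return aliases.get(query.strip().lower())

print(resolve_place("Thai", user_aliases))
print(resolve_place("sushi", user_aliases))  # not localized yet: None
```

The design point is simply that the lookup happens per user, so the same word can mean different places for different people, which is exactly what a single global bot cannot do.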

Ethical Treatment of AI, March 1, 2016

I think people will figure out [how to properly interact] with the human representations, just like they’ve figured out dealing better with women and racial minorities. Certainly not everyone does, all the time, to be sure, but in public forums, in companies, in restaurants, in places where society comes together, the bad behavior is largely eliminated now. I think it will be the same with robots. It represents a continuing evolution of us as animals to treat things better, to not have expressions of rage, anger, disrespect, violence. Those are part of our less evolved selves, part of the Stone Age human, and as we become an increasingly different manifestation of humanity, in a context that is wildly different, our behaviors will similarly come to be much more admirable.

There are so many possibilities for the future, for what different robots will look like or how they’ll manifest writ large, but I think that at the end of the day people will treat them well, just as a natural byproduct of our continuing evolution forward.

How AI Will Evolve UX, January 14, 2015

More frequent use of explicit design patterns, possibly expressed as open source software and libraries. Engineers often begin by using existing code that solves the problems they want to solve. It is taken for granted as the standard best practice, although once upon a time it may have seemed threatening given all the time and jobs devoted to writing custom, one-off code. Remember when websites were all custom and unique? Today, few websites are truly custom, building instead upon templates, content management platforms, and e-commerce infrastructures. This practice will increasingly pervade the world of user experience, where we will agree on the best way to solve some of the easier problems and use those platforms to give us a leg up in focusing on project-specific challenges.

Computers replacing people for completion of more incremental UX tasks and challenges. Recently a company called The Grid launched AI websites that design themselves. While their solution is unlikely to be the nemesis of the human-designed website, it is a shot across the bow to all digital designers that, yes, we too can be replaced by machines. Realistically, the technology’s most likely impact on the domain of user experience is in automating the smaller-scale, incremental evolution of existing systems. A/B testing is now an accepted way of trying out small changes and design tweaks, and there is no reason for much of that process to involve any human whatsoever. Sooner than we think, the human will be removed entirely. Perhaps a human designer will introduce an alternate design for the system to consider, but the machine can do the rest. Everything from preparation, to data collection and analysis, to deploying the winning design, to passing along metrics and explanation to stakeholders can, and probably should, be entirely automated.
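As a hedged sketch of that automated loop (not The Grid’s actual system, and with all names invented), the decision step of an A/B test can be reduced to a two-proportion z-test that either picks a winner to deploy or keeps collecting data:

```python
# Sketch of an automated A/B decision: compare conversion counts for
# designs A and B with a two-proportion z-test and pick a winner at
# roughly 95% confidence. Standard library only; the "deploy" step
# in a real pipeline would follow the returned choice.
import math

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic for B's conversion rate vs. A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def choose_design(conv_a, n_a, conv_b, n_b, critical=1.96):
    """Return 'A' or 'B' if the difference is significant at ~95%,
    otherwise signal that more data is needed."""
    z = z_score(conv_a, n_a, conv_b, n_b)
    if z > critical:
        return "B"
    if z < -critical:
        return "A"
    return "keep testing"

# 120/1000 conversions for design A vs. 160/1000 for design B
print(choose_design(120, 1000, 160, 1000))  # prints "B"
```

Wrapping this decision in a scheduler that collects metrics, deploys the winner, and reports the numbers to stakeholders is the "no human involvement" loop the paragraph describes.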

UX skills becoming more core to the general toolbox of knowledge workers. Again, we can look to software engineering for the example here. The far-flung campaigns of “everyone should know how to code!” have led to an interest in computer programming that has become truly mainstream. While much of this is driven by perceptions around what the jobs of the future will look like, some of it relates to the idea that the ability to code is a potentially valuable life skill in the future wacky world of technology. Yet I suspect both of these rationales are poorly conceived. Computer programming will be more based in libraries and reusable code in the future, to say nothing of artificial intelligence increasingly generating its own code. There will not be a giant job market for all, and the perceived benefit of having some light ability to code is unlikely to serve us any better than conveniences like, say, knowing how to change the oil in our car by ourselves. On the other hand, core UX skills will prove increasingly essential for knowledge workers, as crisp problem solving and creative thinking increasingly define the value-add that human participants contribute to the corporate system. Research skills are an obvious example, empowering anyone, from marketing flack to product manager to software engineer, to determine context with clarity. Problem-solving tools like card sorting are particularly useful for product managers as well as engineers. Indeed, to this point, the need for dedicated UX people is largely the result of the skills required to provide UX not being widely understood, and so being taken care of by people with those titles. In the future, our knowledge and skills will be absorbed into the work of other actors in the system. For us to maintain a role in the process we will need to develop more and different skills.
This might manifest as those with a true art and design background being the practitioners with a key ongoing role discrete from other product disciplines, or it might require our gaining much deeper and more scientific insights related to genetics, psychology, sociology, and neuroscience. The one thing that is certain is that being trained in things like contextual inquiry and information architecture won’t be nearly enough for UX to remain relevant in the future.

So, what does this mean for both companies and practitioners?

For companies, not much. These shifts are years if not a decade away. So long as you keep prioritizing and investing in the quality of your user experiences, as the broader environment evolves so will the way UX manifests within your organization.

For practitioners, the path ahead is a little more uncertain. If you are already mid- or late career there likely is not much for you to worry about. Keep on keeping on. For those who are more early-to-mid-career, now is a good time to think about your relevance in the years ahead. If you are a trained designer or artist and can create beautiful things, you will probably be just fine as you are. If you are more of the liberal-arts-trained interaction designer type, or a researcher and strategist, you should think about ways that you can evolve with a changing landscape. This generally boils down to either branching into other job roles related to UX such as product management, or getting educated in advanced sciences and technologies that relate to human behaviour. Given that UX people are generally curious and enjoy learning new things, exploring different ways to evolve your knowledge and skills should prove enjoyable. In any event, proactively exploring new frontiers will keep you relevant and ahead of the changes sure to come in the future.