Dirk Knemeyer

Internet Security & Behavioural Dysfunction, October 27, 2016

When I do have internet outages, I will tell you, it makes me feel like a comedy character in a dystopian sci-fi show. Which is to say, I’m there just like, “Start working, start working, start working.” It’s not like I go home, “Okay, let me go and read some poetry now,” right? I don’t have that normal, flexible response. I’m just this automaton going, “I need you to start working again. I need you to start working again. I need you to start working again.” Which always makes me feel a little bit self-aware, but it doesn’t change my behavior. This may be apropos of nothing, but I’ll just speak for myself personally: the internet is pretty crucial to me at this point, and losing it really registers.

It’s always on, and it’s immediate, which is to say more than just being on. Whenever we want something, it just immediately appears. It’s not like we make a request, the request goes away for a while, and then something comes back to us when it’s ready, right? It’s just always ready.

If you think about it, in the physical world, how do you protect against viruses? How do you protect against diseases? You need a safe room. You need to go into a place that’s totally cut off from both the bad environment and the good environment, and you need to detox. Then you need to take that detox into the good environment, right? The internet is always on. It’s always in and out, and in and out, and in and out. There isn’t that notion, really, of the safe room. The safe room is required, I believe, to make things truly safe. To have a chance, even, of making things bulletproof from hackers. That would, by definition, rule out immediacy of response. It would require things being held up, being taken into an environment where they could be scrubbed and cleaned and washed. In a world of AI, that starts to become more possible from a speed perspective. Maybe the solution is a lag in our relationship between sending requests out and getting the information, getting the transaction, back. That would be weird to deal with, right?
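The “safe room” idea can be sketched as a quarantine layer that deliberately holds every response for scanning before releasing it. This is a minimal illustrative sketch, not any real product’s design; `scan_payload`, `quarantined_fetch`, and the signature check are all hypothetical stand-ins:

```python
import time

def scan_payload(payload: bytes) -> bool:
    """Hypothetical scrubbing step: return True if the payload looks clean.
    A real system would run antivirus engines, sandbox detonation, etc."""
    return b"<malware-signature>" not in payload

def quarantined_fetch(fetch, request):
    """Fetch a response, hold it in the 'safe room', and only release it
    once it has been scanned. The caller experiences a deliberate lag."""
    payload = fetch(request)   # the response enters the safe room
    time.sleep(0.1)            # stand-in for the time the scrubbing takes
    if not scan_payload(payload):
        raise ValueError("payload rejected by quarantine")
    return payload             # released into the 'good environment'

# Usage: wrap any fetcher in the quarantine layer.
clean = quarantined_fetch(lambda req: b"hello world", "example-request")
```

The point of the sketch is the trade: every request pays the scrubbing delay, which is exactly the loss of immediacy described above.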

My perception is that a service like Netflix could be relatively immune to that, by virtue of the fact that it is streaming a chunk of information at you, right? If you assume that Netflix engineers are not corrupt, that there isn’t hacking going on inside the Netflix organization, they should be able to create a climate that is protected, basically: taking simple data requests from us that aren’t more sophisticated packets, then streaming back this giant pipe of, “Here’s The Hunt for Red October.”

Driverless Cars, Ethics & The Flawed Human Animal, September 30, 2016

Tesla, with their Autopilot feature, is explicit with drivers: “Keep your hands on the wheel, keep your foot on the brakes, and stay alert the whole time,” right? That’s one of the reasons why, when we talked about the technology before, I was very critical of it. I was like, “Why bother? Just drive your fricking car,” at that point. This individual was given those warnings, and despite those warnings, presumably through inattention, or not having the foot or the hands in place, he didn’t override and keep himself safe, keep himself alive. It’s an over-trust in the technology, like, “Oh, yeah, yeah, those warnings, they’re just being overly careful.” It’s like 10 and 2: we don’t really do 10 and 2, that’s overly careful. The guy’s dead because of it, and that’s really unfortunate.

I’d be interested to know what’s happening on the litigation side. If a traffic light malfunctions and someone dies in a car crash due to the malfunction, can you sue the … I don’t know how all that is set up, but can you sue the city, or the engineer, or the manufacturer? I ask those questions because I think the highways are a great place to talk about all this stuff, because we have this illusion of control. Right now we’re driving our car, we crash, and it’s someone else’s fault or it’s our fault. However that’s figured out, humans are blamed. We’re heading toward a future of driverless cars. In that scenario, it’s very likely that cars will be far safer and fewer people will die on the highways. A lot of people die on highways; I don’t know what the number is. It’s certainly tens of thousands a year in the United States, maybe in the hundreds.

I don’t know the scale, but a lot of people die on the highways right now. If those technologies cut that number in half, that’s objectively safer, objectively better. But the people who die in those accidents are now dying because something went wrong with somebody’s technology: my car’s software, or your car’s software, or something else other than my agency as a driver, or yours, where we’re taking responsibility for what’s going on. Now it’s something totally different. I’m really curious how that’s going to pan out on the litigation side. I think there are going to be a lot of people who hate the technology because they were the unlucky lottery winners whose loved ones were killed. Fewer people die overall, but my person is dead. If they had been allowed to hold their steering wheel, they wouldn’t be dead.

I think those are knotty legal and ethical things that are going to be great test cases, the first fights in how we think through all the implications of turning various parts of the world over to artificial intelligence.

We’re going to be in danger, because humans are careless. We are. There was one time, I don’t know, it’s probably been a year or two even, so I think it’s worse now, most likely. I drove down the road and just asked, “How many people are texting or on their device?” I passed 12 people, and every one of them, 12 out of 12, was on a device. The true rate is certainly under 100%, but that sample is a perfect example of it. I say we are careless because, look, I’m on my device sometimes too, unfortunately, on the road. It’s been communicated to us, we know: “Hey, dumb dumb, you are much more likely to die whizzing along at a high rate of speed in this big, heavy metal thing if you’re doing that,” but we still do it. We make this little calculation based on incomplete understanding, without really grokking what the danger is. We thrust ourselves into further danger, and for what?
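The informal 12-for-12 observation can actually be given a worked statistical floor: seeing every one of n drivers on a phone doesn’t prove a 100% rate, but it does bound the plausible true rate from below. A quick illustrative calculation, using only the sample size from the anecdote and a conventional 95% confidence level:

```python
# If all n observed drivers are on their phones, a one-sided 95% lower
# confidence bound on the true rate p comes from solving p**n = alpha
# (the largest p that would make an all-positive sample that unlikely).
n, alpha = 12, 0.05
p_lower = alpha ** (1 / n)
print(f"95% lower bound on the true rate: {p_lower:.1%}")  # roughly 78%
```

So even this tiny sample suggests, under the usual binomial assumptions, that a large majority of drivers on that road were on their devices.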

For nothing: for the difficulty of being bored, for the draw of the little serotonin rush. That way of behaving is going to continue to haunt us moving forward. It’s why we use easy passwords, it’s why we’re not secure with our online information, and for many of us, probably unbeknownst to us, all of our goodies are out there already, and people could be using them against us, leveraging them, even today if they really wanted to. We’re careless. That carelessness is going to add risk to the future of AI.
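The “easy passwords” point can be made concrete with a little entropy arithmetic. The lengths and alphabet sizes below are illustrative choices, not figures from the source:

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy for a password of `length` symbols drawn uniformly
    at random from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

# An 8-character lowercase-only password vs. a 16-character password
# drawn from the ~94 printable ASCII symbols.
weak = entropy_bits(8, 26)     # lowercase only
strong = entropy_bits(16, 94)  # full printable ASCII
```

The weak example lands under 40 bits, well within reach of offline guessing, while the strong one exceeds 100 bits; the carelessness described above is choosing the former because the latter is tedious.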


Humanity as User Interface, May 12, 2016

The whole wearables thing is just a transitional phase. Embeddables are going to be where it’s at. Wearables are going to be clumsy, clunky junk.

When you let your mind sort of go crazy and explore, it seems like dystopia all over the place, but I don’t think the technologies will manifest that way. The technologies can’t manifest that way, and here’s why. Take your example of employers being up in your shit about every damn thing you do at work: it’s not feasible, and the reason it’s not feasible is that we humans are not robots. We are going to rest; we are going to take moments where we are not linearly kerchunking away, like John Henry on the railroad, at the exact thing the employer wants us to do right in front of ourselves. If that level of monitoring existed, it would spoil the relationship between virtually every employee and every employer everywhere in the world, and that’s not going to happen. So, yes, there are a lot of interesting questions about where this could go, what could happen, how it impacts us, but a lot of them won’t even be manifest, because they would undermine the very fabric of reality.

We’re a long way away … I shouldn’t say that, because neuroscience is moving very quickly, but we certainly don’t yet have a coherent sense of the optimal way to work, and of course it would have to be different for each person, because we’re all wired so differently. Something like: four-hour shifts, with two and a half hours of kerchunking, half an hour of daydreaming, and fifteen minutes for a power nap. At some point that kind of stuff will be figured out, but I think we’re a long way away from it, and it would only be in the context of that deeper understanding of how the human animal optimally functions that analyzing how people spend their time at work, what they’re doing, would have any value. Until then, it’s just voodoo.

I’m picking on that one example to push back against the whole waterfall of interesting thought experiments you had about these crazy ways it could go. A lot of them aren’t going to go that way, because it would completely undermine the basic systems and functions we have in place. The ones we should probably be more concerned about are at the level of the government: Big Brother, tracking. Right now, with our cell phones, we can be tracked in pretty granular ways, probably more so than we realize, and maybe it’s even happening in ways beyond what my naïve little brain would allow for.

I don’t know that embeddables change the game that much. Where I’m interested with embeddables is this: at the end of the day, our eyes, our hands, our mouth, and other parts of our body are part of a UI. They’re part of a user interface between ourselves and the outside world, and we’re going to get to a point where those interfaces are less important, possibly to the point of obsolescence, because everything can go straight into our brain, into our central nervous system, into the neurological and endocrinological and psychological aspects of who and what we are. We’ll have direct mind-to-mind communication, be able to picture each other in fulsome ways from across the country or across the world, and be able to download, not even in the literal sense of how we think of download per se, huge chunks of data and thought.

That’s coming. It’s not super close, but we’re on that path, and it opens up a lot of really interesting questions, because then the frontier becomes the brain; the frontier becomes the self. Right now, cyber terrorists or hackers are trying to crack our thumbprint, because right now our thumbprint gets us into our phone. We’re also moving toward ocular technology, right? The high-resolution technologies you talked about before will make it trivial for someone to copy my eye-print: somebody far away, whom I don’t even realize is there, getting a picture of my eye at high enough resolution to reproduce it, making ocular authentication completely irrelevant. That’s all trivial, and it’s all coming pretty much as fast as ocular recognition technology itself.

To me, where it gets more freaky and more interesting is when the brain becomes the final battlefield: when we move beyond the eye, the thumb, and the lettered passwords to where the brain, the true essential self, is somehow unlocking systems and communicating externally. Our self-representation in the world comes largely from our mind and spirit, whatever that is or isn’t, and that is going to be the frontier of hacking, and that is going to be the frontier of terrorism. I think that’s where really interesting stuff starts to come, and now I’m going pretty far down the road.

There is a lot of learning to do. We mysticize and privilege humanity, but we really need to step back, deconstruct it, and recognize that we are just an IO device. Our bodies are our user interfaces. When the wind blows, it blows on my skin, which makes me feel something, which makes things happen in my brain. Those are all things science can get to the point of understanding directly, from the wind hitting all the way through to the totality of what you think and feel. The next step is to replicate those things, whether it be wind on the skin, or the things you’re hearing in your ears or seeing with your eyes.

At the end of the day, that can be chunked down into IO stuff: data in, data interpreted, and data making systems fire within our system. Science is well down the path of figuring those things out, and once it’s done, the sky is the limit. Until now, it’s been applied technology that has really driven the digital revolution. The next revolution to come is one that is going to be driven by the science.


Hacking Synthetic Biology, April 7, 2016

If you think about, in the physical world, the notion of a virus: we’re exposed to real viruses, and they materially change our bodies, sometimes in rapid and horrific ways. Once we created the internet, the virtual world so to speak, humans started creating viruses that attack and destroy things there, similar to the Ukraine power issue that you mentioned earlier. This is sort of the third level. We’re changing the world in such a way that code can change our bodies, that code could, in theory, operate as a virus within us. The technology’s not there yet, yadda, yadda, yadda. It’s coming.

What does that look like, and how can it be controlled? We sure as hell don’t control viruses on the internet. We certainly don’t control viruses in the hardware and software we have today, but that’s relatively easy to fix: you wipe your computer, it’s a day or two of hassle, and then you’re back and ready to go. If people can start rewriting the code inside of us in malicious ways, that’s nasty. This is science fiction at this point, of course, and because I’m not very well educated in these areas of science and engineering, that’s certainly where I go. Not from the standpoint of, “Oh my God, I’m so scared about this, we can’t let it continue.” I kind of take for granted that it’s coming inevitably. Of all the things we talk about, this is probably the one that is, for me at least, most alarming in terms of the potential consequences.


Hacked to Death, January 21, 2016

There will be nasty [hacking] events. The question is, will there be a few nasty events because things have been locked down properly in advance, or will there be many nasty events because we’re going to learn more from the school of hard knocks than we might like? The part of it that I’m most concerned about has to do with implantables.

The smart city stuff, there’s not a lot there that can kill people, that can have that type of impact. But I’m concerned about the things that could take a life by virtue of a virtual hack. We’re going to have to be very diligent in protecting against those, because it’s going to be ever more seductive to implant things in or on our bodies: for people with, say, diabetes, to regulate our endocrinology, and to regulate it not just for diseases but for other conditions and states of being, or more advanced tools for regulating the heart. All of these are things where a hack could directly end up taking a life. We’re heading toward that.

I think there will be moments where hacks do take people’s lives. That’s where it gets really hairy.


Pre-Crime & Social Engineering, August 13, 2015

This is a great example of where big data is so powerful. If you have science, if you have an infrastructure that you can pour a lot of data into, and that data has some validity to it, suddenly you can determine some really interesting and powerful things. One of those, potentially, is the likelihood of people committing crimes again.

If you know that I have an 82% chance of committing another violent crime, then for me, as a citizen, considering human rights and the rights of other citizens, I want me, the 82% future violator, to be controlled. I don’t want the 82% future violator to be given that 82% chance to harm and destroy other people. That doesn’t seem equitable to me. Saying that is probably, to a number of people listening to this show, very controversial, which gets to the complexity of the issue.
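The equity tension here can be sketched as back-of-the-envelope arithmetic. Assuming the 82% figure is a well-calibrated probability (a big assumption, and the population size is invented for illustration), preemptively controlling everyone flagged at that level trades prevented crimes against people restrained who never would have reoffended:

```python
def intervention_tradeoff(n_flagged: int, predicted_risk: float):
    """For n_flagged people, each predicted to reoffend with probability
    predicted_risk (assumed well calibrated), estimate the expected number
    of crimes prevented by preemptive control and the expected number of
    people restrained who would never have reoffended."""
    prevented = n_flagged * predicted_risk
    wrongly_restrained = n_flagged * (1 - predicted_risk)
    return prevented, wrongly_restrained

# Illustrative: 1,000 people flagged at an 82% predicted risk.
prevented, wrongly = intervention_tradeoff(1000, 0.82)
```

Under those assumptions, roughly 820 crimes are prevented while roughly 180 people are controlled despite never going on to reoffend, which is exactly the moral battleground described below.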

I think it’s really interesting stuff, and we’re heading toward a whole new moral battleground that we haven’t had to deal with in our culture in the past. It’s coming fast, and it has the potential to do much more good for society and for our citizens, but probably at the expense of individual rights we’ve become accustomed to taking for granted. I’m fascinated to see how this all plays out.


Hackers Hurting Our Physical Selves, July 30, 2015

It’s a great foreshadowing of how we’re moving into a period where our connected computing devices are integrated into our lives in a way where they can be used to hurt us physically. They can be used to hurt us for real. Until now, the limits of hacking were basically identity theft, which is not great. If you really had your identity stolen, there could be some big inconveniences and, depending on how you react to it, potentially big problems, but nothing that could physically harm you directly, as if a weapon were hitting you.

Here, we have this exploit where somebody could take your car and, as the little picture they showed with the article illustrated, drive it right into a ditch. You could be killed by a hacker who gets into your device and gives it instructions to take you off path and put you in the way of physical harm.

This is just the beginning. There is going to be a lot more of this in the future, not less, as devices are integrated into us physically. By “into us,” I’m not necessarily talking about a cyborg perspective, but just about things touching our bodies, or controlling things in and around our bodies, that, if taken in a certain direction, could cause us harm. To me, it’s a warning sign for something that those of us on the inside have known is coming. This is now showing it to the mainstream and saying, “Look at the potential of what can happen,” and again, it’s just the beginning.


Expansion of Individual Power to Harm, June 25, 2015

There’s a lot of power that each of us has as individuals to hurt people or organizations that have very little to do with us. Go back N thousand years: there was a time when the only way to hurt a person or an object in the environment was to stand within arm’s reach of that thing and pummel it in a personal way, either with your fists or with some kind of handheld object. Then through technology we got things like the bow and arrow, and on and on and on. Now we’ve created this virtual space, which totally changes the rules, and within it we have technology that can be mastered and learned by an individual and then used to harm at the scale of a country, or the whole of the internet to a certain degree, more from a service-disruption standpoint than a data-breach standpoint.

But it’s a problem that’s not going to go away. The nature of computer science, the nature of computer security, is such that an individual, or particularly a group of individuals banded together officially or unofficially by a federal government or a federal group, has the power to get into anything they want, given enough time. That’s the price we pay, whether it be the U.S. government or individuals, for having our information on the internet, for engaging with this thing that is, by its very nature, global, virtual, and really hard to protect. It’s tough.