32 The Cyber Impact

Notes

Branka Marijan and John Daniele note that it is possible for certain hackers to get control of a ballistic missile and launch a nuclear war, or to take control of an electric grid.

Panelists:

  • Branka Marijan
  • John Daniele

Video

SUMMARY KEYWORDS

system, privacy, technology, happening, John, concern, information, weapons, organization, attacks, talk, manufacturer, issues, cyber, important, kinds, companies, security, state, focus

SPEAKERS

Branka Marijan, John Daniele, Metta Spencer, Intro/Outro

SUMMARY

This is a conversation between Metta Spencer, Branka Marijan, and John Daniele discussing various topics related to emerging technologies, autonomous weapon systems, cybersecurity, and the potential risks associated with cyber-attacks. Branka Marijan is a staff program officer for Project Ploughshares, a research and advocacy organization based in Waterloo, Ontario. She is researching emerging technologies pertaining to autonomous weapon systems and the implications of relying on algorithms for decision-making in warfare.

The conversation highlights the concerns regarding the development and deployment of autonomous weapon systems, emphasizing the need for human control over such technologies. The discussion then shifts to the broader implications of emerging technologies, including the Internet of Things and potential cyber-attacks on critical infrastructure. Branka and Metta express concerns about unintended consequences and the difficulties in attributing cyber-attacks to specific actors. John Daniele, a cybersecurity expert, then joins the discussion.

 

John Daniele discusses the concept of physical attribution, which involves gathering inside information about an organization’s operations through means such as planting someone inside, surveillance, or electronic penetration of their systems. He mentions Russia’s bold and aggressive cyber-attacks, emphasizing that pushing too hard and quickly can lead to mistakes and unintended consequences.

Metta Spencer and Branka Marijan express their concerns about the unintentional disasters caused by human error or the unintended consequences of cyber activities. Branka emphasizes that viruses developed by state-funded entities can escape beyond their original intentions and pose global risks. They mention the Stuxnet virus, which unintentionally spread and infected other systems, highlighting the potential unintended consequences of cyber operations.

The conversation then shifts to the topic of privacy. Metta expresses her frustration with the increasing difficulty of contacting people due to concerns about privacy and data protection. Branka acknowledges the importance of privacy concerns, particularly in relation to activist movements and the potential misuse of personal data. They discuss the impact of technology advancements and the extent to which personal data is used and shared in everyday life.

John points out that while some individuals may not prioritize their personal privacy, it should ultimately be a personal choice in a democratic society. He highlights the difference in privacy perspectives between regions such as North America and China, where the state may have more access to individuals’ communications.

Branka Marijan emphasizes the need for improved digital skills and education to navigate a connected world, protect privacy, and make informed choices. Project Ploughshares aims to provide information to communities and churches, linking technological insights from experts like John with the political implications of cyber activities.

Metta Spencer raises her concerns about personal digital literacy and the expectation that individuals should be responsible for their cybersecurity. She prefers relying on the government for protection and questions whether there are legal and institutional solutions to safeguard individuals from cyber threats.

John Daniele explains the rationale behind holding manufacturers of information and communication technology (ICT) hardware and software liable for negligent security failures that cause harm. The proposal aims to address the lack of improvement in software security despite self-regulation by companies. By introducing punitive measures, such as holding manufacturers accountable for damages caused by negligent vulnerabilities, the hope is to incentivize companies to invest more in security. The funds collected could be used to bolster national security infrastructure, including educational programs, incident response, and intelligence capabilities.

Spencer raises concerns about the potential limitations of such measures, particularly in cases where cyber-attacks are used as tools of national aggression. Daniele clarifies that the proposed policy is primarily focused on vulnerabilities in hardware and software that result from negligence, rather than deliberate offensive actions. The goal is to address critical vulnerabilities in systems like the Internet of Things and operational technology, which have the potential to impact critical infrastructure.

Overall, the conversation highlights the importance of addressing cybersecurity risks, promoting digital literacy, and considering policy measures to enhance security standards and accountability in the ICT industry.

TRANSCRIPT

The following transcript has been machine generated using “otter.ai.” Prior to using information from the transcript, please watch the video to catch any obvious errors.

Intro/Outro  00:01

Welcome. This is Talk About Saving the World, a weekly series of discussions sponsored by Peace Magazine and Project Save the World. Every week, we join some friends and experts at our respective webcasts to talk about how to prevent one or more of the six most serious global threats to humankind: war and weapons (especially nuclear), global warming, famine, pandemics, massive radiation exposure through something like a reactor explosion, and cyber attacks. Our host is the retired University of Toronto sociology professor Metta Spencer.

Metta Spencer  00:53

Good evening, I’m Metta Spencer, and I’m in Toronto with a new friend, Dr. Branka Marijan. Did I say it right?

Branka Marijan  01:06

Yes, yes.

Metta Spencer  01:06

Branka is somebody I’ve recently discovered. She gave a talk at a very interesting conference that was held about a month ago here in Toronto by World BEYOND War, which is, I suppose, mostly a US-based organization. But they came to Toronto for their annual meeting, and we had a very good time, because they brought a lot of people that I hadn’t met before and even introduced me to Canadians whom I should have met before but hadn’t, and that includes Branka. Branka works as a staff program officer (that’s the word for it, right?) for Project Ploughshares, which is a research and advocacy organization based in Waterloo, Ontario, and which does a lot of important work on some development issues, but primarily peace and disarmament issues, and she says she’s been working there three years. So I should certainly have met you before, Branka, and now we are acquainted, because she gave a talk about her current interests. Her official bailiwick, or beat as they would say in journalism, is research on emerging technologies, and she’s gonna tell me how she drifted into something a little more militaristic. Tell me about your work, Branka, hello.

Branka Marijan  02:52

Hello, Metta, it is so great to hear you and to have this opportunity to speak with you about the type of research that I do at Ploughshares. So I came to Ploughshares in 2015, after finishing my doctoral studies, and I was asked to look at emerging trends in warfare, and whether there were any issues that I identified as being important for a peace-oriented audience to focus on. And perhaps surprisingly, I became very intrigued by the discussions on the implications of emerging military and security technologies. I think up until that point I had been aware, and as a personal interest had paid attention to, how technology was transforming politics as usual. And I thought it was really important for us to start focusing on this in terms of the changes in the humanitarian field, but also in the types of weapons that were being developed, and the surveillance technologies. I thought that these were important topics for people to be aware of, and to track and monitor. And so this is how I guess I drifted, from being really focused on post-conflict societies and peacebuilding to looking at the kinds of weapon systems that were being developed. And one in particular was really concerning to me, and that was the issue of autonomous weapon systems, and this idea that we were potentially outsourcing decisions over human life to algorithms, right?

Metta Spencer  04:45

Oh, now you’re talking about killer robots already?

Branka Marijan  04:48

Yes, yes. Yeah, I think when you say killer robots, people have a perception of what that means, and it’s usually shaped by Hollywood films, Skynet and Terminator, but that’s not at all what we’re talking about. Actually, something like 12 countries have over 380 weapon systems which are semi-autonomous. We’ve seen a lot of sophisticated weapons technology which could potentially be worrisome, because the question is really the extent to which there’s human control over these weapons systems. And, of course, hacking and all these things are a concern: when you have weapons systems which are far away, your communication link to them might be vulnerable. Or perhaps countries see a strategic advantage in sending weapons that they do not communicate with anymore after a certain point, in order to avoid their being hacked.

Metta Spencer  04:50

You jumped right in too.

Metta Spencer  04:52

So let me think of some cases of that kind. The first thing, of course, that comes to my mind would be ICBMs, because once they’re launched, you can’t bring them back, you know. But you’re thinking of some other type of weapons that could be launched, and then you stop communicating with them, although you could?

Branka Marijan

Although you could, yeah. Just to be clear, at times countries might decide, in certain environments, if they are really worried about hacking and other things, to no longer communicate with that system at a certain point, in order to avoid detection, in order to avoid it being used differently than they intended. So that’s one concern, and that’s where this question of the security of these cyber-enabled systems is really important. But that’s one subset. I mean, autonomous weapons are a big part of what we are focused on, because we see them as an example of this over-reliance on algorithms, and this belief that some of this technology is necessarily better than humans, and is perhaps more neutral. And everything that we see shows us that machine learning and artificial intelligence is only as good as the type of data that it has, and often the systems are trained on historical data, which is biased and tends to discriminate. So you can think of recent cases of predictive policing, where some of these technologies were used only to draw more attention to already over-policed areas in minority communities, because the data being used to make these assessments is historical data. And so I think there are very important questions that have emerged in our research and our work on autonomous weapons that reach beyond these particular systems, and are actually quite relevant for our discussions of how we want emerging technologies to shape warfare, and how we want them to shape law enforcement. And so I think these are interconnected and important questions.

Metta Spencer  08:38

You have jumped all the way over the human interface, haven’t you, to having machines that kill people without being controlled by people? My level of questioning is much simpler. I’m not even attuned to the issues of how to control weapons of that kind, except to keep them from being produced. And indeed, we already had one talk show a few weeks ago, with Erin Hunt and Yeshua Moser-Puangsuwan, about their trip to Geneva to a meeting of the Convention on Certain Conventional Weapons, which is in charge of regulating, if anybody is, the future acceptability of lethal autonomous weapons. So that is certainly a scary thing, but I can be scared by much simpler things. I get real goose pimples thinking about what would happen if, as I understand is perfectly possible to do, some enemy state decided to shut down our electric grid. Or there are the threats of what is called the Internet of Things coming up. I think somebody today even mentioned that your vacuum cleaner could spy on you.

Branka Marijan  10:22

Yeah, exactly. So your Roomba could potentially spy on you, and it could let people know what the inside of your house looks like. And so malicious actors can really use these technologies in ways that they were not intended, right…

Metta Spencer  10:42

But that would be still on an individual level, right?

Branka Marijan  10:46

Very much so.

Metta Spencer  10:46

It would only be some thief who wants to break in and who might have gotten smart about figuring out which room has the window that’s unlocked or something.

Branka Marijan  10:57

Exactly.

Metta Spencer  10:58

But that’s something on a mass scale, where somebody turns on all the electric stoves in North America, or, you know, all of the ones made by GE, let’s say. And unlocks everybody’s front door, and turns on the thermostat in your furnace, and so on, and burns down everybody’s house. Or if you’re out on the road in your new electric vehicle, it decides to stall in the middle of the 401, and everybody else’s car stalls in the middle of the road. That is really scary to me, and what I want to know is, well, I’ve heard that that’s coming.

Branka Marijan  11:47

So yeah, I think that attacks on critical infrastructure are the most serious and worrisome ways that we could see some of these cyber attacks play out, with very real and tangible impact on everyday life. And I think that is an incredible concern, and I think you’re absolutely right. Sometimes we may over-focus on weapons, when the Internet of Things and all these things being connected allows malicious actors the ability to sit halfway across the world and cause a great deal of destruction. And particularly if you think of how viruses spread, as was the case in the UK with the National Health Service, I mean, it can be something that is an unintended consequence, right? It was a virus that impacted certain computers and was carried around the world. It wasn’t targeted at them.

Metta Spencer  12:55

Is that right?

Branka Marijan  12:55

Yeah, so it wasn’t targeted.

Metta Spencer  12:57

Nobody intended to shut things down in other countries or anything; they were just trying to get at the health service, or what?

Branka Marijan  13:03

The belief is that the attack was particularly targeting Ukraine, but then it kind of spread around the world. And because a lot of these systems run by public institutions might not be updated as often, you know, there might be software updates or patches missing that should be there, there isn’t what is called basic cyber hygiene, and so they’re particularly vulnerable. And this is the other point I want to make: we focus on intentional attacks, but there are also the unintentional and unintended consequences of certain things that spread because of the nature of the cyber sphere. You cannot contain these things, right? We call them viruses, and maybe you cannot contain them to a certain area, because everything is so connected, and that is probably a huge worry. I think in Canada, as in most Western countries, there is an awareness about protecting critical infrastructure from cyber attacks. But I think the more worrisome, and the more difficult to respond to, are these unintended consequences.
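
Branka’s point about uncontainable spread can be illustrated with a toy simulation: on a densely connected network, malware aimed at a single target cascades to machines nobody intended to hit, and the unpatched machines, the ones without basic cyber hygiene, are the ones that fall. This is a minimal sketch in Python with arbitrary invented parameters, not a model of any real worm.

    import random

    random.seed(7)

    N = 200                  # machines on the network
    PATCH_RATE = 0.6         # fraction with basic cyber hygiene (patched)
    CONTACTS = 8             # random outbound links per machine

    machines = [{"patched": random.random() < PATCH_RATE, "infected": False}
                for _ in range(N)]
    neighbours = {i: [random.randrange(N) for _ in range(CONTACTS)]
                  for i in range(N)}

    machines[0]["infected"] = True   # the single intended target
    frontier = [0]
    while frontier:
        current = frontier.pop()
        for other in neighbours[current]:
            m = machines[other]
            if not m["infected"] and not m["patched"]:
                m["infected"] = True        # spread to unpatched neighbours
                frontier.append(other)

    print(sum(m["infected"] for m in machines), "of", N, "machines infected")

Even with most machines patched, the infection typically reaches far beyond the one intended target, which is the unintended-consequence dynamic described above.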

Metta Spencer  14:33

Why would you say it’s more worrisome? You know, I think that’s a good question. Is it that you think it’s more likely that we will be the victim, the collateral damage, of an inadvertent attack, or of somebody’s stupid mistake that accidentally starts a problem that nobody really intended? If you say that’s true, I don’t disagree with you, but I wonder how you know that?

Branka Marijan  15:13

Yeah. So I’ve heard Professor Stephanie Carvin from Carleton University speak about Normal Accident Theory, where a chain of events happens and it’s really difficult to say what led to that particular accident happening; there’s a whole set of things. And I think accidents happen, and we don’t foresee certain accidents happening because we’re so focused on intentional things happening. So yeah, I think the reason I’m more worried about these unintentional ones is because what we’ve seen, at least so far with cyber attacks, is that they’ve been kept below a certain threshold, right? If they are state-led or state-initiated attacks, it’s been very difficult to trace some of these attacks back to particular states, and that’s because some of these states use individuals, and find individuals who maybe don’t necessarily directly know who they’re working for. I mean, they likely do, but the point is that that kind of situation has been kept below a certain threshold.

Metta Spencer  16:43

Wait, I mean, let me question you on that, because I think we may be joined in a few minutes by John Daniele, who is a cybersecurity guy who works mostly, I guess, trying to protect outfits like banks. And here’s a note from John right now, and he’s saying something which I can’t read. I’m sorry, I’m no good at this. Let’s see, are you here, John? Okay, maybe I can promote you. Here he comes. Yes, he’s gonna be with us in a second. I was just about to... Hello, John. Are you there? Hello, John. Hi. Oh, okay. Very good timing; I was about to refer to you. But before I do, let me introduce you to your co-panelist and the rest of the known world, at least Western civilization. This is John Daniele, folks, and in the other little postage stamp image that I have on my screen is Branka Marijan.

John Daniele  17:25

Yes. Pleased to meet you.

Branka Marijan  17:59

Pleased to meet you.

Metta Spencer  18:00

Our cyber experts. And I expect you’re going to have more to say to each other than I could dream up to ask Branka. But I was about to call your user name in vain, John, by questioning something that Branka said, which is that it’s often hard to find out who did some malicious thing, to track down the evildoer if they’re someplace far away trying to disguise their identity. And I said that I’d heard you talk about how easy it is to find some people who don’t want to be found. So could you and Branka have a little conversation about how hard it is, or how much we really do know about who’s doing bad things from other countries?

John Daniele  18:50

Well, I think Branka is correct in saying that investigating these kinds of state-sponsored attacks is very, very difficult. There’s the issue of using cutouts, individuals who may or may not be willingly involved in committing a cyber crime or some sort of offensive activity. There are proxies that are often used; Iran is a country that often makes use of criminal proxies in engaging in cyber offensive activity. They also manipulate computer systems, so part of launching a sophisticated cyber warfare type campaign involves breaking into individual computer systems, televisions, anything that’s connected to the net, that these perpetrators could filter themselves through. It is possible, however, from a technical perspective, to look at the different kinds of tactics, techniques, and procedures that they use and come up with a bit of a fingerprint for a particular campaign; tying it back to a specific state is incredibly difficult. That’s what’s referred to as physical attribution. So technical attribution is possible: you can actually fingerprint the kinds of activities of a particular threat actor group, as we refer to them, and you can say, okay, well, these groups typically go against these targets and use these tools, et cetera. But physical attribution costs a lot of money. It involves covert operations in the real world as well as operations in cyberspace. So it is very, very rare that you would get absolute physical attribution of a specific campaign or attack.
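
To make John’s notion of technical attribution concrete: in practice it often amounts to comparing the fingerprint of an incident, the tactics, techniques, and procedures (TTPs) recovered from forensics, against catalogued behaviour of known threat-actor groups. The sketch below is a toy, not a description of any real tool; the group names are invented, and the technique IDs are placeholders styled after MITRE ATT&CK identifiers.

    # Toy technical attribution: rank known groups by TTP overlap.
    KNOWN_GROUPS = {
        "GROUP-A": {"T1566", "T1059", "T1071", "T1105"},
        "GROUP-B": {"T1190", "T1505", "T1003", "T1071"},
        "GROUP-C": {"T1566", "T1204", "T1547", "T1041"},
    }

    def jaccard(a, b):
        """Set overlap: size of intersection over size of union."""
        return len(a & b) / len(a | b)

    def rank_candidates(observed):
        """Return (group, score) pairs, best match first."""
        return sorted(((name, jaccard(observed, ttps))
                       for name, ttps in KNOWN_GROUPS.items()),
                      key=lambda pair: pair[1], reverse=True)

    observed = {"T1566", "T1059", "T1105"}   # TTPs recovered from an incident
    for group, score in rank_candidates(observed):
        print(f"{group}: {score:.2f}")

A high overlap score is still only circumstantial: as John says, it tells you a campaign looks like a known group’s work, not which state, if any, directed it. That second step is the expensive physical attribution.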

Metta Spencer  20:46

Okay, let me ask this about that troll factory in St. Petersburg that everybody knows about. Okay, was that hard to find? Was it a real secret, and did they try to keep it secret, and was it hard to keep secret? How did everybody get to know about it?

John Daniele  21:05

So it’s hard to tell exactly what isn’t known about that entity. But that entity is sort of like a special purpose entity, a seemingly private organization that’s in service to the state; many countries around the world have these sorts of special purpose entities to engage in covert activity around the world. The Internet Research Agency, at some point, had to have a physical presence in order to buy campaign ads. They would have had to have some sort of registration that they signed on to, they would have had credit card numbers, et cetera. So it’s difficult for us to know exactly how Facebook uncovered the specific campaigns that the Internet Research Agency ran; presumably they had many, many different kinds of covers, and they were using multiple credit cards through a distributed kind of system. But suffice to say, at some point they had to have a physical tie-in. And I would presume that perhaps the way their activity was uncovered was through analyzing the financial transactions behind the campaigns that were paid for. That’s probably my best guess, an educated guess, as to how they might have been able to tie together various different campaigns and say, okay, we know this one is the Internet Research Agency, because we’ve done all this covert work to tie it back there, and all these other transactions are actually quite related. So that’s probably how they did it. But it’s, it’s almost…

Metta Spencer  22:39

So that’s what you call physical attribution, when they probably sent somebody out to actually physically look at property or something?

John Daniele  22:48

Well, at some point they would have had to find some way of getting inside information about that activity. So either you plant somebody inside the organization who reveals information about their operations, you surveil them, or you electronically penetrate their systems and monitor them with digital surveillance. That would be the physical attribution part.

Metta Spencer  23:14

Now, Branka and I also had a little question, or she said something that I also questioned a little bit, in that she thinks that maybe the worst danger is not deliberately malevolent action on the part of a foreign country, but inadvertent disasters caused by, I guess, human stupidity. Am I representing your point of view?

Branka Marijan  23:43

Yeah.

Metta Spencer  23:43

And I’d like to know whether or not this is a commonly accepted perspective.

John Daniele  23:50

From my perspective, I don’t think it so much boils down to human stupidity or incompetence, per se. But if you take a look at Russia as an example, Russia has been very bold in the kinds of attacks they’re launching; they’re getting very, very aggressive. And when you push that hard, eventually mistakes are made, things are forgotten about, there are threads that you have pulled that you forgot to clean up. So eventually, that kind of activity, if you push hard enough, fast enough, or you try to do things on a much cheaper budget, and Russia is really strained right now for resources, this is when you have mistakes that are then unraveled by other foreign intelligence organizations running counterintelligence operations. And I think that’s probably what happened in the case of the OPCW hack…

Metta Spencer  24:51

Oh.

John Daniele  24:52

…with the Organisation for the Prohibition of Chemical Weapons, yeah. So I think probably what happened there is they tried to push too hard too quickly, on a budget that was smaller than they should have allocated to that sort of operation. And, you know, the tools that they used probably cost them no more than $2,000. These are tools that are commonly used in the cybersecurity sector by white-hat hackers to test the security of corporate systems, et cetera. So they were by no stretch of the imagination sophisticated weapons of cyber intrusion. But yeah, the whole operation overall was less clean, and I think that’s probably because they pushed too hard too quickly.

Metta Spencer  24:56

Yeah, okay, but now, the question here is that you’re focusing, it seems, John, on state-run surveillance and hacking and malicious activities. And I’m not sure, listening to Branka, how worried I should be about the prospect that Mr. Putin is going to get me, as opposed to the notion that somebody’s going to leave a screwdriver in the wrong place when setting up a computer someplace, and that will accidentally turn on all the furnaces in North America and burn us all up, et cetera. In other words, Branka, do you want to defend your point of view here, if I’ve misrepresented it?

Branka Marijan  26:35

No, no, I was just making the point that sometimes the viruses that are developed by what we in the end believe are state-funded or state-supported entities can escape beyond what their original intention was, and they will have unintended consequences, right? So what initially started as a deliberate attack against a particular state or a particular institution in another country can then turn into a global concern and a global issue, because that virus (I keep referring to viruses, because that’s been the experience over the last few years) can then transfer to other countries that were not intended as targets of it.

Metta Spencer  27:23

As in the case of Stuxnet, which got loose and infected other people, right? I don’t know the story, but I understand there’s a movie about it.

Branka Marijan  27:31

Yeah.

Metta Spencer  27:31

Is that right?

Branka Marijan  27:33

Yeah, and it spread, right? And I think what I like about John’s point is that if you push hard enough, at some of these troll factories and all these sorts of malicious acts, you will kind of reveal your cards a little bit, and people will start paying attention. But it’s these unintended things that worry me, particularly because of the level of connectivity that we have. Yeah, that’s why I’m worried. I think the unintended consequences are worrisome.

Metta Spencer  28:15

Okay. Yeah. I guess I don’t know how worried to be, or about what. Please tell me what I should have tonight’s nightmare about, because I certainly do sometimes have fears about the coming of the Internet of Things, when everything is hooked up to everything else. I am now reading, or I was reading until a few days ago when I got distracted, a book called something like Click Here to Kill Everybody. Have you heard of that new book?

Branka Marijan  28:48

I haven’t read it. But I’ve heard of it. Yeah.

Metta Spencer  28:50

Yeah, and basically he has a concept, which is something like Internet Plus, something above and beyond ordinary internet concerns, where he talks about how, in fact, everybody is connected to everybody else. And you can’t really detach yourself from, let’s say, all of your friends’ mailing lists on their computers. So if somebody gets them, they’ve got you too, babe.

Branka Marijan  29:28

Yeah, I don’t know, maybe John has more experience with the technical side of this. But yeah, I think that is the issue, really. I mean, even if you don’t want your images to be posted, let’s say, on social media sites, it’s very hard. You have to be really on top of making sure that your image isn’t circulating on Facebook or Instagram, because your friends will post these photos that you’re in. And so this whole notion of how much you consent within the cyber sphere is, I think, interesting.

John Daniele  30:01

One interesting phenomenon is that criminal organizations are now starting to look not so much at where you have a digital identity, but at where you don’t have a digital identity. So you might have a LinkedIn account, but you may not have a Facebook account. And what individuals are finding is that criminal organizations are creating Facebook accounts with your photos, filling them out with your identity, pretending to be you, and going after your own social network. So all your colleagues who are on Facebook are then being linked up to your fake Facebook profile and interacting with it, and eventually the criminals are going to try to get money out of these individuals in various different ways, or use that information or that access to exploit your life and your social network. So I think that is something that is now becoming a little bit more prevalent. So not only do you have to worry about protecting your digital identities and accounts, you may also have to consider where you don’t necessarily have an identity or an account, and whether you should create one just to let everybody know that, hey, this one is me; if any other ones pop up, it’s not me. So it’s getting quite complicated.

Branka Marijan  31:21

And you do get requests from people you’re already friends with, right, with their photo and their name. And you say, but I’m already friends with you; why are you sending me another request? And then you learn that their account has been compromised, or their information has been stolen. And so I think that’s another layer to that discussion about how secure we are, yeah.

Metta Spencer  31:47

You know, I don’t have any decent sense of privacy. I’m so transparent; you can learn almost anything you want to know about me. And I don’t understand why I should constantly be wary of appearing on Facebook or allowing personal information to be accessible. So there’s all of this flap about how, what was it, Cambridge Analytica steals all this data. And I understand I don’t want my information to be used to help elect Donald Trump. But other than that, I think it’s a wonderful blessing to be able to get in touch with people so readily, or it used to be; in fact, it’s less and less easy now. I did organize a big conference in the spring; it took me six months’ work, and it was the hardest job I think I’ve ever done, because I did it with my one assistant, and that’s it. And I have organized conferences before, three of them, and this was way harder. And it’s harder because people don’t answer their emails anymore. They don’t want to be available, or let you have their phone numbers or their email addresses, and you just have a terrible time contacting the people that you need to contact, to be speakers or whatever you want them to do, or even to invite them to come and attend. Now, I really wonder how much we really need to worry about privacy, and how much of that is an overreaction of some kind. There’s no quantitative answer either one of you can give me, but you know, frankly, I get mad. I get mad when I can’t contact people because they won’t let me have their email address. And if you try to contact an organization now, all you get is a write-back box where they say, enter your message here on our website and we’ll get back to you, and they never do. So honestly, I truly believe that if it were possible to calculate the productivity of Canada now, and to factor out how much of it is related to this, I betcha the productivity of Canada is adversely affected by this new concern about privacy, because it’s wasting a huge amount of people’s time. It was wonderful to be able to contact each other back ten years ago with email, and it sure isn’t easy anymore. Having delivered myself of some burning, scorching resentment, I hereby invite you to refute me.

Branka Marijan  35:00

I think the concerns about privacy are coming from a real place. I think that activists, people who advocate for human rights, depending on what you do, have a certain concern about the ability of actors, and in some countries the actual state, to track your activity and monitor your activity and be able to prevent you from communicating. So this has come from somewhere real. And it is harder to reach people, and it is harder to get people to answer their emails, and I’m glad I’m not the only one who deals with that. But there is a real importance to the concerns and questions about privacy, particularly as we try to make everything smart. In Toronto, we want to have a smart city; we want to have smart approaches to healthcare, everything. There’s this focus on using that kind of personal data in ways that we might not want it to be used, and I think that’s where these questions are coming from. So, for example, think of the Fitbit, or fitness apps on your phone; these could then be used by insurance companies to deny insurance to you, and that’s not fair. So that’s where I think a lot of these concerns are coming from: how is this data being used, by whom, and for what purpose?

And, and maybe John can address this, but the sophistication of some of the attacks, the targeted attacks on individuals, and the exploitation that can happen with this kind of technology, and how far it has advanced and how much we use it in our everyday lives, is why we’re seeing a lot of people start wondering about what kind of limits to set; and, you know, ____, right, you want to acknowledge that you’re being tracked on a website. So I think that’s all coming from a very important place, and this whole question about privacy really needs to be thought through: the extent to which it can be possible, and then, as you mentioned, the extent to which it’s actually useful, or how it ends up impacting our interactions and our use of technology. And I do think that tech companies are aware of the level of distrust in technology, and I think that this is an incredible concern for them as well. It might not appear that some of them are paying attention, but I think there is a lot of distrust of technology that happens as a result of all these concerns. Sorry, John, I kept going on.

John Daniele  37:59

I think when we deal with issues of privacy, you know, probably 90% of the population doesn’t care as much about their personal privacy. They’re sharing information on Facebook; they have pictures of their kids online that they share. Not everybody necessarily goes into the Facebook privacy controls and locks things down. But the point about privacy, I think, is that it has to come down to a personal choice. We live in a democratic society; there are certain democratic values that we have, and the idea that the state, as an example, would have unfettered access to information about your private life and how you live your private life is just not something that’s acceptable, particularly here in North America. Other places around the world have varying degrees and levels of privacy, different ideas about privacy; privacy in China is viewed in a very different way than privacy here in North America. The difference, however, is that here you have a personal choice to share your information, and the government doesn’t just have unfettered access to everything. In China, there’s much more unfettered access that the state has to your communications in general. And that changes the dynamic. And it’s not so much that, you know, maybe you even trust your government, you have nothing to hide from your government. But what about other governments that would like to break into those systems and use the surveillance collected from within a country’s own environment for some other ulterior purpose or motive? What if criminals can access the information that governments collect? What if criminals could access those surveillance networks and have unfettered access to your private information? These are the complicating factors that I think come into play, that we have to seriously consider. And that’s why I think that privacy is important, and I think that companies who develop technology need to be much more mindful of how to build privacy into a system. You know, today we demand from our vendors good security: that if I’m using a piece of software, I’m not going to get compromised by using the software. But there still isn’t quite the same level of commitment to developing technology that has good privacy controls. You can have a very secure app with zero privacy controls, but you can’t have good privacy without security. And…

Metta Spencer  40:45

Wait a minute, back up. I’m not sure I understand that, your point about privacy and security. Say that again.

John Daniele  40:53

So there’s an intrinsic relationship between privacy and security. Good privacy depends on good security, specifically encryption. Good security does not necessarily require, or doesn’t necessarily imply, good privacy controls. So your information might be secure, but on the back end there might be data and statistics that are generated on the basis of your use of the system that are being sold to any third-party company that wants to buy them. I mean, that’s a grievous privacy issue, but it doesn’t imply that the security of your data is not maintained; certainly the security of the data is maintained. But how that information is used does not necessarily denote good privacy controls.
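
John’s distinction can be shown in a few lines of code. The toy message store below is “secure” in the narrow sense (message bodies are encrypted at rest, here using the third-party cryptography package), yet it has no privacy controls at all: it quietly accumulates who-talked-to-whom metadata that nothing prevents being sold on. All names in the sketch are invented for illustration, not any particular vendor’s system.

    from datetime import datetime, timezone
    from cryptography.fernet import Fernet   # pip install cryptography

    class SecureButNotPrivateStore:
        """Encrypted storage (good security) with clear-text usage
        analytics on the back end (no privacy controls)."""

        def __init__(self):
            self._fernet = Fernet(Fernet.generate_key())
            self._messages = []
            self.analytics_log = []   # could be sold to any third party

        def send(self, sender, recipient, body):
            # Security: the body is unreadable without the key.
            self._messages.append(self._fernet.encrypt(body.encode()))
            # Privacy failure: revealing metadata is collected anyway.
            self.analytics_log.append({
                "from": sender,
                "to": recipient,
                "at": datetime.now(timezone.utc).isoformat(),
            })

    store = SecureButNotPrivateStore()
    store.send("alice", "bob", "meet at six")
    print(store.analytics_log)   # encrypted bodies, but revealing metadata

The reverse does not work: without the encryption there would be no way to offer privacy at all, which is the asymmetry John describes.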

Metta Spencer  41:46

Okay, let me ask a general question. I’ll ask something later about our specific plank on our platform, but just in general, from all this, I don’t know how much to worry, or how much risk I am in as an individual, or you, or the people next door. How risky is life now for us because of these threats? How dangerous is our situation, personally? And then, how dangerous is it to national security? How many risks are really things that we should be worrying about? How much is being overblown and too paranoid, and how much is real, both at the personal level and at the level of the nation?

John Daniele  42:51

Well, on the question of national security, I tend to be much more alarmist now, at this stage of my career, than I have been probably at any other point. I’ve got 20 years of experience in running forensic investigations, so being called in after an organization has had a breach and trying to understand what happened, how to fix it, and how to stop that kind of breach from happening again. And in my view, we are hurtling towards an ever more connected world, and the technologies that we use and the underlying frameworks that they’re plugged into are not getting much more secure. Certainly there’s a lot that is better than 10 years ago, but there’s a lot that has stayed absolutely the same. And I am increasingly concerned about large-scale cyber war as a reality. You know, after having worked in a number of different government organizations dealing with state-sponsored breaches, there’s a lot happening behind the scenes that is never, ever reported in the press. And the scale at which this sort of offensive cyber activity is happening by foreign nation-states is something that’s quite alarming to me. The devastating nature of what they’re doing is just ever increasing. So here in Canada especially, most organizations, most companies, and even the government, I don’t think, treat these issues as seriously as they need to be treated. One thing I can say that’s rather provocative is that if we take a look at some of our biggest industries in Toronto, like the financial services industry, and we take a look at a subset of that industry, like hedge funds and private equity companies, I can safely say that I think the large majority of private equity firms in Toronto have been penetrated by nation-state actors. They’re lurking in the background, and these organizations have largely no interest in spending a lot of money to do anything about it. This past summer, my team was consulting with a $1 billion private equity firm that had significant challenges across the board, and it became an existential threat to the organization. And we’re seeing a lot of damage behind the scenes, as individuals who are responding to these kinds of cyber breaches; whether they get reported in the press or not, there’s a lot of damage being done, there’s a lot of money being lost. You talk about a loss of productivity and a loss of IP; Canada is particularly vulnerable. We have a national security strategy that is only a few years old, and it’s in its infancy, really. I mean, we are as a country, as a nation, well behind the eight ball in comparison to our counterparts in Europe, and far behind compared to our counterparts in the US, in my opinion. That doesn’t necessarily imply that we don’t have the capability; we have a lot of incredibly smart, talented people working in government and in industry in Canada who know how to deal with these issues. But we don’t yet have, coming from our civilian governments, a strong cohesive strategy.

 

Metta Spencer  46:45

Now, let me think of how that connects to you, Branka, because your work is with Project Ploughshares, a church-based peace research and peace advocacy outfit, and you’re doing research on these issues. What are you going to do with that knowledge? How are you going to apply it, and why should I want you to be on the job tomorrow?

Branka Marijan  47:17

Right, right. So I agree with John a lot; I think there is an understating of the impact of some things that happen in the cyber sphere. And my concerns are primarily that this will escalate and lead to actual conflict, right? So not just in cyberspace, but kinetic action. And why should we care? This is one of the things people sometimes ask me: why is someone from a peace-oriented research organization talking about these issues? And I always say that we have to be much more aware. We need much better digital skills, I think, in the country, all of us. This is something we have to get better at doing in our schools: we have to teach individuals in Canada from a young age how they are going to exist and work and live in a world that is connected, how they can protect themselves, how they can protect their privacy, and why this notion of choice in the sphere is important. And one of the key goals that we have as a research organization is to provide this information to the community and to the churches, and they are certainly aware of these issues and are interested in understanding them, and are always asking questions about the impact of these technologies on conflict zones where they might have humanitarian work. And I think we can bridge these two worlds: we can show them the things that experts like John can tell us are important from a technological point of view, and then we can link that to the political sphere and its implications, right? And that is why I essentially do the kind of work that I do, because I fundamentally believe that technology is changing warfare on the ground for people in places far away who are experiencing some really terrible things. And it is really our role to speak up and show: what are these things that are happening, and what are these developments that we need to be paying attention to? So I think we cannot ignore them. There might be a tendency to be a little bit overwhelmed by your earlier question, you know: what should we be worried about? There are so many things to worry about, particularly in today’s political climate. But I think it’s really important to understand, with some of these technologies, that nothing about them is inevitable. We have tools, we have ways to regulate them; part of the challenge…

Metta Spencer  49:58

Oh no, really? Now there’s why I wonder, because so far I’ve had this awful feeling that you’re going to put the responsibility for my digital literacy on me. And frankly, I am not competent to be digitally literate. I know, because I’ve tried. When I get an email that’s phishing to try to get my data, I send it to my assistant and say, should I open this? And he says, “No, stupid, you can tell right away that that’s not something you should ever open.” And he’s right, but I don’t know how to tell that. So I don’t want to be responsible for myself; I want my government to protect me. And I haven’t yet heard you say anything about how the Canadian government, or even the United Nations, can protect me by having new regulations. Are there technological solutions? And are there legal and institutional innovations that could protect me from phishing attacks, or from having my furnace turned on when I don’t want it to be on?
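
For readers who share Metta’s frustration, here is a minimal sketch of the kind of red-flag heuristics an automated mail filter applies, using only Python’s standard library. The domains and phrases are invented examples, and rules this simple are an illustration of the idea, not a reliable defence.

    import re
    from email import message_from_string
    from email.utils import parseaddr

    SUSPICIOUS_PHRASES = ("verify your account", "urgent action required",
                          "confirm your password")

    def phishing_signals(raw_email, trusted_domain="example.org"):
        """Return a list of simple red flags found in one raw email."""
        msg = message_from_string(raw_email)
        flags = []

        display_name, address = parseaddr(msg.get("From", ""))
        sender_domain = address.rpartition("@")[2].lower()
        org_name = trusted_domain.split(".")[0]
        # Red flag: display name claims the trusted org, address differs.
        if org_name in display_name.lower() and sender_domain != trusted_domain:
            flags.append(f"display name/domain mismatch ({sender_domain})")

        payload = msg.get_payload()
        body = payload if isinstance(payload, str) else ""
        for phrase in SUSPICIOUS_PHRASES:
            if phrase in body.lower():
                flags.append(f"pressure phrase: {phrase!r}")

        # Red flag: link hosts that don't belong to the trusted domain.
        for host in re.findall(r"https?://([^/\s]+)", body):
            if not host.lower().endswith(trusted_domain):
                flags.append(f"off-domain link host: {host}")
        return flags

    sample = "\n".join([
        "From: Example Org Support <support@examp1e-org.net>",
        "Subject: Urgent action required",
        "",
        "Please verify your account at http://examp1e-org.net/login",
    ])
    print(phishing_signals(sample))

Real filters score hundreds of such signals; the point is that the burden need not rest entirely on the person reading the email.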

Branka Marijan  51:16

I think there are both, and many others; there are multiple levels at which we need to start addressing these things, right? So John can speak better to what tech companies and software companies or firms should be doing. But John mentioned the national strategy for addressing cyber attacks and cyber issues, so there are responses that can come from there. Unfortunately, we also have a role to play, and these digital skills are going to be ever more important, particularly as we start relying more on different technologies and these connected technologies. So there is some responsibility that we have, to be informed and to know how to respond to these phishing attacks and other things. But I think there’s also a lot that the government can do, and there are ways and regulations that can be developed. And having said all of that, and I wouldn’t want to end on a bad note, I think there are limitations too: there will always be malicious actors who will try and manipulate things. But if we have a good strategy, and if we try to foresee some of those things, then that outcome isn’t inevitable.

Metta Spencer  52:31

Okay, let me ask a final question that I’m aiming primarily at John, because John, you were the one who thought this up. You will remember the forum that we had at the end of May, called How to Save the World in a Hurry. And our project was to develop a list of 25 public policy innovations that, if adopted, would reduce the risk of six global threats to humankind, really major, big threats, of which cyber attacks we now consider to be one. And we did develop exactly that, and one of these planks in our Platform for Survival is as follows. I’ll read it to you: “Manufacturers of ICT hardware and software shall be liable for negligent security failures that cause harm.” Well, I hardly know what that means, but I assume that this must be a policy innovation that would really be worthwhile and save us a lot of grief. So would you please explain, John, what this proposal would entail, and why we should go for it?

John Daniele  53:55

So the reason why the committee settled on this as one of those public policy areas to focus on is that we’ve had many, many years of software development companies basically self-regulating with respect to vulnerabilities. They’re developing code that gets used in a variety of different products that we use as consumers. Vulnerabilities that are introduced into that programming code, mistakes that criminals can take advantage of, have popped up and had some pretty devastating effects. When we take a look at whether companies are getting better at handling these kinds of defects that they introduce into code, specifically security defects, we don’t see, in the last, let’s say, 20 years, a dramatic improvement in security. We’re still seeing many, many more examples of infrastructure code, and the tools that we use, the video cameras that we now buy, all still devastatingly vulnerable to different kinds of deficiencies. And basically it comes down to the fact that developers are still not spending enough money to make sure that, before they release a product into the consumer world, they have done all the due diligence they could to ensure that the device is going to act and behave as expected, and is not going to present some sort of vulnerability that others can exploit to take advantage of their customers. So one of the things that we thought to do is introduce at least one punitive measure, to say, well, if the software industry hasn’t really done much to secure itself, perhaps we can incentivize it in some way. And the thought was, to use this as an example: if there’s some sort of operational technology that’s used in the electrical grid, and there is a flaw associated with that device which should have been caught during some sort of QA process, and that flaw represents negligence on the part of the hardware manufacturer or the software developer, then whatever damages it creates, the organization should be held responsible for those damages to some extent. And the idea was, we can pool that money together and invest in a national infrastructure to deal with all the other different kinds of security issues that we might have. So the damages can be collected at a national level, pooled together, and that will go into a resilience fund, so that our government has the resources necessary to deploy different kinds of programs that would benefit our nation’s security. That could be educational programs, incident response programs, better intelligence infrastructure, et cetera. So that was the long-term thinking behind that policy objective.

Metta Spencer  57:24

Okay, well, it sounded like part of the time you’re talking about the plaintiff in a case of this kind being the consumer, I mean me, if I bought a new app and it didn’t work; or I got a new phone today, as a matter of fact, which I’m having a hell of a time getting functional, and somebody exists there that I should be able to sue for not having put the thing together properly, or the software in good working order. But then, on the other hand, it sounds to me like you drifted over into talking about international uses of this equipment as part of a military system. Now, it sounds to me like it would be great to be able to sue somebody because they put a defective screw into a piece of equipment that made it blow up in my hands. But when it comes to the use of this equipment, let’s say it works perfectly, but it’s used to kill somebody in another country. Is the manufacturer going to be held responsible for the application or the use of that software or hardware to harm people as a tool of national, not security, but national aggression, let’s say?

John Daniele  58:56

So I think that gets into a whole separate issue that deals with warfare. There’s a civil system, the judicial system that we have, for plaintiffs to sue manufacturers for producing negligent flaws that cause some sort of specific harm to them. What we were thinking of is more trying to find some way of implementing kind of like a tax, if you will. There are damages that could be received by the plaintiff in court, but just the mere existence of a negligent vulnerability within a piece of hardware should have some kind of base punitive cost to it. So if we can figure out how to deploy a system whereby anything that’s released to the market that has any kind of negligent security failure, something that was really easy to find, that programmers should have found through some sort of QA process, and there are different classes of vulnerabilities that are low-hanging fruit that you can look at and say, well, this stuff really shouldn’t be present in a product that has been released to the market and has gone through different kinds of test protocols. So in those instances, let’s establish some base cost that the software manufacturer or hardware manufacturer will pay, and let’s find some constructive use of those funds; that was the objective there. The intention of those funds is not for offensive military operations, but rather to bolster defensive infrastructure within each nation-state; that was the idea.

Metta Spencer  1:00:41

Okay, I would have less objection to that, although I’m not sure it’s going to defend human society from catastrophic cyber attack. I mean, it’s fairly petty compared to the idea that an ICBM could be hacked and launched, you know, that somebody could get hold of our nuclear missiles or start a nuclear war intentionally. That is a bit more of a concern than whether we can get a refund if our new cell phone doesn’t work right.

John Daniele  1:01:23

Well, it’s specifically worrisome when we think about this class of technology called the Internet of Things, or operational technology. These are sensors that can be part of critical infrastructure systems; I think that’s where it becomes really, really important. Today, if we take a look, as an example, at most solar-powered infrastructure, all the alternative-type systems that we have, battery backup supplies, solar panels, et cetera, most of the operational technology that controls these systems today that I’ve reviewed is just horribly, horribly vulnerable. It’s right there, easy for the picking, and it has to do with really cheap parts being used by manufacturers that are just utterly careless in developing the code for those sensors, those programmable logic controllers that are used in these systems. And these systems can also find their way into electrical grid systems, SCADA systems. And if you take a look at even the SCADA systems that we rely upon here for our own electricity grid, there are many, many examples of programmable logic controllers that just have inherent vulnerabilities that can be exploited. So that’s where I think it does have a direct effect.
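
To make concrete how exposed this class of operational technology can be: Modbus/TCP, a protocol still common on programmable logic controllers, has no authentication or encryption in its base specification, so any client that can reach TCP port 502 on a device can issue requests. The sketch below hand-builds one “read holding registers” request with Python’s standard library; the host is a placeholder documentation address, not a real device.

    import socket
    import struct

    PLC_HOST = "192.0.2.10"   # placeholder (TEST-NET-1), not a real device
    PLC_PORT = 502            # standard Modbus/TCP port

    def read_holding_registers(host, port, unit_id=1, start_addr=0, count=4):
        """Send one 'read holding registers' (function 0x03) request."""
        pdu = struct.pack(">BHH", 0x03, start_addr, count)
        # MBAP header: transaction id, protocol id (always 0),
        # length of what follows (unit id + PDU), unit id.
        header = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(header + pdu)
            return sock.recv(1024)

    if __name__ == "__main__":
        # No credentials, no handshake: a reachable device simply answers.
        print(read_holding_registers(PLC_HOST, PLC_PORT).hex())

Writes are just as unauthenticated as reads in the base protocol, which is one reason security reviewers find these systems “easy for the picking” when they are deployed on reachable networks without compensating controls.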

Metta Spencer  1:02:49

Okay. Well, I think you’ve drawn us back to the point that Branka made a while ago, that a lot of things may be inadvertent. So if one of these defective parts or pieces of software code is put into a system, say my personal cell phone or some other app that I’m using, and it is tied together in the Internet of Things, then the possibility, I guess, would exist for it to have ramifications that are national or global in scope.

John Daniele  1:03:23

I think so, and the focus is not just on inadvertent mistakes. There should be a certain standard that we apply to ensure a base level of safety with these devices. We need to develop the standards, and we need to enforce those standards. What’s happening right now is that the standards exist, but we have a voluntary system where organizations are left to their own devices to decide whether to implement those standards or not. And by and large, what I’m seeing is that the large majority of those vendors are choosing not to implement those secure practices, because there’s too much of a cost. When it comes down to “I’m releasing a shoddy piece of hardware because I don’t want to spend the money to make sure that this is going to be safe for use,” I think that’s gross negligence, and we need to find some way of dealing with those kinds of scenarios. Sure, this is also important with our cell phones, but it’s most especially important with the kinds of technology that run the backbones of our networks. That’s where the rubber really meets the road with respect to these kinds of negligent failures. And I’m seeing just way too many negligent failures in devices. There are so many devastating hacks that I’ve looked into from a forensic point of view that really boil down to bad products cheaply made: horrible firmware littered with security vulnerabilities that were rather easy to find.
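
Note: the “low-hanging fruit” John says QA should catch includes things as basic as private keys and default credentials baked directly into firmware images. A minimal sketch of that kind of check appears below; the directory name and pattern list are illustrative assumptions, not an exhaustive audit, and real QA would pair this with firmware-unpacking tools and a proper static scanner.

```python
# Minimal sketch of a low-hanging-fruit firmware check: scan an already
# unpacked firmware image for hardcoded secrets and debug leftovers.
# "firmware_unpacked" and the pattern list are hypothetical examples.
import re
from pathlib import Path

FIRMWARE_DIR = Path("firmware_unpacked")  # hypothetical unpacked image

# Patterns that should never ship in production firmware.
RED_FLAGS = [
    rb"-----BEGIN (RSA |EC )?PRIVATE KEY-----",  # baked-in private keys
    rb"(?i)password\s*=\s*\S+",                  # hardcoded credentials
    rb"(?i)telnetd",                             # legacy debug services
]

def scan(root: Path) -> None:
    """Print every file containing one of the red-flag patterns."""
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        data = path.read_bytes()
        for pattern in RED_FLAGS:
            if re.search(pattern, data):
                print(f"RED FLAG {pattern!r} in {path}")

if __name__ == "__main__":
    scan(FIRMWARE_DIR)
```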

Metta Spencer  1:05:12

Okay, now, I mentioned this behind your back before you arrived: Branka and I were talking about it earlier, and Branka, you had some concerns about whether this kind of plank in our platform could be implemented. Do you still have those concerns, having heard John now?

Branka Marijan  1:05:36

No, no, I think because John’s focus is a little bit different, and having heard him explain it, it kind of makes sense. My concern was that one of the challenges we face, I think, is with dual-use technologies, and technologies, let’s say, which are components in a system where the manufacturer did their due diligence, and it works as it should, but it’s being used for a malicious purpose, or it’s being used in ways that the manufacturer couldn’t have foreseen, right? So a small sensor that is very useful for civilian purposes could also be useful in a weapons system. And then that weapons system could be used, you know, in Yemen or somewhere, with drones, and we’re killing people. So that’s where I was thinking about how, and I’m actually really interested in this, understanding what kind of regulation we could develop for these dual-use technologies. I think John’s point is really great, and it would be interesting to see that happen. Certainly, it would be possible within the national institutional and legal framework that we have. Internationally, I think the challenges are greater in terms of practically implementing that, particularly when we talk to companies that say their intention was never to develop a particular system for that purpose. So that…

John Daniele  1:07:04

Oftentimes they would never know. I’m thinking of, say, a Bluetooth transceiver being used in some sort of weapons system; the manufacturer may never know that it was used in a weapons system. So sometimes you can get down to really minute detail. But then there’s another example: encrypted traffic management systems that are used to manage the secure connections between your client web browser and a server. They’re exchanging secure certificates so that your browser can trust the server it’s connecting to, and vice versa. This sort of stuff is dual use. There are many different manufacturers in North America that have sold this kind of technology to, let’s say, Saudi Arabia, as an example. Saudi Arabia turns around and uses this in a widespread deployment, where they effectively have the ability to tap any sort of encrypted communications happening from the web browsers of citizens, and they can examine those communications at will. China does a very similar thing with respect to the Great Firewall; there’s a bunch of proxies that are used to handle those secure connections, and it’s North American software and hardware manufacturers that are selling that technology to China that enables this massive surveillance network. So those are issues that I think are key and critical. There are challenges with dual-use technology right across the board, but these cases are much more obvious and much more apparent: you know exactly what this technology is going to be used for, and you chose to go ahead and close that deal anyway. And the reason they’re doing it is that these deals are highly lucrative. But there are ethical considerations that are being sidestepped for profit. I fully admit those kinds of things are happening. There are companies in North America that are enabling massive surveillance networks around the world, and some very large, very well-known companies are complicit in this.
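
Note: what John describes is interception at national scale. The proxy terminates the “secure” connection itself and re-signs it with its own certificate, which is also what makes interception detectable in principle: the certificate the client actually receives differs from the one the real server uses. A minimal client-side sketch follows; the hostname and pinned fingerprint are hypothetical placeholders.

```python
# Minimal sketch: detect TLS interception by comparing the certificate
# presented now against a fingerprint recorded over a trusted network
# (certificate pinning). HOST and PINNED_SHA256 are hypothetical.
import hashlib
import socket
import ssl

HOST = "example.org"                    # hypothetical service
PINNED_SHA256 = "replace-with-fingerprint-seen-on-a-trusted-network"

context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        # DER-encoded certificate the server (or a proxy) presented.
        der_cert = tls.getpeercert(binary_form=True)

fingerprint = hashlib.sha256(der_cert).hexdigest()
if fingerprint != PINNED_SHA256:
    # An interception proxy re-signs the connection with its own
    # certificate (often one from a locally installed root, so the
    # handshake still succeeds), and the fingerprint will not match.
    print("WARNING: certificate differs from pin; possible interception")
else:
    print("certificate matches pin")
```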

Metta Spencer  1:09:28

Not just well known companies.

Branka Marijan  1:09:29

[It seems] we’re in agreement.

Metta Spencer  1:09:32

Nation-states with whom we are already somewhat familiar at the moment. Well, it’s a pleasure to wind up, realizing of course that we’re nine minutes over time, but we were having such a good time talking that I didn’t want to stop anybody. And the beauty is that you two seem to agree on most things; it’s nice to wind up in harmony. The only thing unpleasant about it is that I still don’t know what to worry about tonight. When I go to sleep, what shall I have nightmares about? I guess you can’t help me with that. But I’m very grateful to you for this very interesting conversation, and I’m sure lots and lots of our viewers will also find it illuminating, if worrisome. I hope I can ask everybody to tune in again next week, because every week from eight to nine Eastern time I’ve been holding conversations of this kind, with local friends or, as a matter of fact, with people in other countries, about various global issues and various global threats. It’s part of our project called Project Save the World, and we’re going to keep after this, because there is a world here to save, and it needs every bit of help that it can get. So I’m grateful to both John and Branka for doing their part to save the world from cyber threats. See you next time. Thank you. Thank you.

Intro/Outro  1:11:10

This conversation is one of the weekly series of talks about saving the world produced by Peace Magazine and Project Save the World. Please visit our website, tosavetheworld.ca, where you can sign the Platform for Survival, a list of 25 public policy proposals that, if enacted, would greatly reduce the risk of six global threats to humankind. Come back next week for another discussion of a serious global issue.
