Author: Paul Meyer
Chair, Canadian Pugwash Group | Senior Advisor, ICT4Peace
Cyberspace, the broad term for the system of interconnected computer networks of which the Internet is the chief embodiment, is a unique, human-created environment. The potential of information and communication technology to benefit humanity is vast and the growth in its use worldwide has been exponential. Today close to four billion people are connected to the Internet and a community of “netizens” has emerged.
Unfortunately, the growth of cyberspace has not been matched by a similar development of global governance for it. Even more worrisome is the degree to which cyberspace has become “militarized”, with states developing capabilities not only for the defence of their own systems, but also offensive capabilities that threaten damage and destruction to entities beyond their borders. These trends within the national security establishments of leading cyber powers have accelerated, and the detrimental impact of cyber operations on civilian interests has grown. A narrative of “cyber war” has been espoused by major states, depicting this remarkable product of human ingenuity as just another “war-fighting domain”.
Fortunately, amid these disturbing developments there has also emerged a constituency advocating for maintaining cyberspace for peaceful purposes. Embracing stakeholders from government, civil society and the private sector, various initiatives have begun to take shape to promote the goal of a peaceful cyberspace and to insist on norms of responsible state behaviour in cyberspace. In parallel, “netizens” are demanding that the information technology industry take full responsibility for ensuring the security of the products it sells to consumers.
Two key demands, or planks of a platform for remedial action, reflect the external and internal dimensions of concern over cyber security: states must commit to cooperative security arrangements, and industry must accept responsibility for what it puts on the market. The first idea is for the United Nations and similar organisations to insist on a peaceful cyberspace and to hold states to account via binding arrangements specifying norms of responsible state conduct.
The second idea is to require manufacturers of cyber hardware and software to assume liability for negligent security failures in these products that cause significant harm.
As the overwhelming owners and users of the Internet, it is incumbent on civil society and the private sector to press governments to take appropriate action to ensure that cyberspace is preserved for peaceful purposes in the interests of all.
Video credit: ICT4Peace Foundation. A longer interview is available on YouTube at https://youtu.be/BveJ3V1ADUo.
How to Post a Comment
1. Give your comment a title in ALL CAPS. If you are commenting on a forum or Peace Magazine title, please identify it in your title.
2. Please select your title and click “B” to boldface it.
You can:
• Italicize words by selecting and clicking “I”.
• Indent or add hyperlinks (with the chain symbol).
• Attach a photo by copying it from another website and pasting it into your comment.
• Share an external article by copying and pasting it – or just post its link.
We will keep your email address secure and invisible to other users. If you “reply” to any comment, the owner will be notified, provided they have subscribed. To be informed, please subscribe.
** If you are referring to a talk show, please mention the number.
We produce several one-hour-long Zoom conversations each week about various aspects of the six issues we address. You can watch them live and send a question to the speakers, or watch the edited version later here or on our YouTube channel.
WHAT IS QUANTUM ARTIFICIAL GENERAL INTELLIGENCE?
BY DR. JOHN PAUL WERBOS
Tuesday, November 28, 2023
Artificial General Intelligence (AGI): What it really is, why it is taking over, and why only a new QAGI could save us
There was a huge news story about AI and AGI which rightly shook the world over the past two days:
https://www.youtube.com/watch?v=Q9-grdoIgUw
What shook me most was a clear statement by Sam Altman, head of OpenAI, depicting a commitment to move ahead with lots and lots of apps making money in the short term without putting much energy into cross-cutting or integrative solutions.
In many ways, the really big issue is whether the human species is capable of working together to develop that level of integration which is necessary to avoid the total chaos and instability (leading to extinction) which is on its way NOW unless we work better and more effectively to use our own natural intelligence, WITH AI and such used as positive tools.
OVERVIEW FOR HIGH DECISION MAKERS
The key acronym AGI (Artificial General Intelligence), promulgated many years ago by Ben Goertzel, is finally getting the high-level global attention it deserves. The world badly needs all of us to connect better and deeper, to do justice to the interconnected technical and policy issues which AGI is already pushing us into very rapidly.
BUT FIRST: WHAT **IS** AGI?
I have seen many, many definitions for many decades.
I first heard Ben’s talk in person at the WCCI 2014 conference in Beijing, where I presented my own concept of AGI AT THE LEVEL of mammal-brain intelligence: https://arxiv.org/abs/1404.0554. The NSF of China and the Dean of Engineering at Tsinghua immediately invited USGOV to work together on a joint open global R&D program, but soon after I forwarded that to NSF, certain military intelligence contractors objected and arranged for the US activity to be cancelled, leaving the field to China. (YES, that was very serious!)
Phrases like AGI are not defined by God. We all have a right to work with different definitions, so long as we are clear.
=== LIKE SOME OF YOU, I would firmly reject the old Turing test as a definition of what an AGI is. Even Turing himself used much more powerful mathematical concepts when he moved on from early philosophical debates to mathematics that can actually be used in computer designs! (I bcc the friend who showed me Turing’s Cathedral by Dyson, a great source.) The Turing Test makes me laugh about Eliza, perhaps the first AI-based chat program, developed at MIT decades ago, which showed many of us just how incredibly shaky the Turing test really is.
I would propose that we define an AGI as a universal learning system, which learns to perform either cognitive optimization or cognitive prediction as defined in the NSF research announcement on COPN, which is more advanced than any such announcement elsewhere even today:
https://www.nsf.gov/pubs/2007/nsf07579/nsf07579.htm
In other words… universal ability to learn to adapt to any environment, with maximum expected performance, or to predict or monitor any time-series environment over time.
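As a deliberately tiny illustration of “learning to predict a time-series environment” (my own sketch, not the COPN formulation), the following fits a one-step linear predictor x[t+1] ≈ a·x[t] + b to an observed series by ordinary least squares, the simplest ancestor of the deep-network predictors discussed below:

```python
# Minimal, illustrative sketch: learn to predict a time series by
# fitting a linear autoregressive model x[t+1] ~ a*x[t] + b with
# ordinary least squares (closed form, no libraries needed).
def fit_ar1(series):
    xs, ys = series[:-1], series[1:]       # inputs and one-step targets
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

series = [1.0, 2.0, 4.0, 8.0, 16.0]        # generated by x[t+1] = 2*x[t]
a, b = fit_ar1(series)
print(round(a, 6), round(b, 6))            # recovers a = 2.0, b = 0.0
```

Real cognitive prediction systems replace the linear model with a learned nonlinear one, but the task (minimize prediction error over time) is the same.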
TODAY, I created a Google Group on QuantumAGI to facilitate easier discussion among the most important players in the real technology creating a POSSIBILITY of true quantum cognitive prediction or optimization, or function minimization/maximization.
===
Years ago, in the cross-agency discussions which created COPN, my friends who ran cognitive science and AI in computer science asked: “Do we want to set the bar so high?” I asked: “Should we really use the word ‘intelligent’ to refer to systems which cannot even learn anything?” In fact, people with long and deep experience in classical AI knew about Solomonoff priors, one key approach to universal learning-to-predict, which Marvin Minsky himself urged me to study in the 1960s when I took an independent study from him.
The mathematical foundation for the most powerful, universal cognitive prediction now emerging, using classical computing and deep neural networks, is reviewed at: werbos.com/Erdos.pdf. QUANTUM AGI extends that further, simply by doing orders of magnitude better in the loss function minimization tasks at the core of all general effective cognitive prediction methods. EXAMPLES of thermal quantum annealing, in relevant special cases, have already demonstrated that advantage, as shown in papers from IBM and Japan and others at
https://www.nsf.gov/pubs/2007/nsf07579/nsf07579.htm.
=========================================
IS IT REALLY SAFE TO UNLEASH AGI AND QAGI ON THE EARTH, GIVEN HOW SCARY THE PRESENT TRENDS ARE??
Many of us, including me, have thought VERY long and hard on that.
Based on the recent talks from Ilya and Altman, etc., I believe that we are presently on course to a very intense and difficult future, similar to the kinds of massive changes in niche which have doomed the world’s leading species to extinction again and again over the millennia. We are in the kind of decision situation which meets the technical concept of a “minefield” situation, which we are unlikely to survive unless we build up quickly to a level of collective cognitive optimization beyond ANY of today’s AGI or social institutions.
FURTHERMORE…. as in my new book chapter attached (book coming out next month or January from India Foundation), I really doubt that our cosmos lacks intelligence at the level of QAGI already. Keeping up with that level of collective intelligence may simply be ESSENTIAL to our best chances of survival as a species.
YES, there are HUGE dangers if this is developed in the dark. That is why I believe in the necessity of open, transparent international development, including even leadership in the QAGI technology itself in new international venues.
ANOTHER VERSION, WITH DETAILS FOR SUBSTANTIVE TECHNOLOGY LEADERS
HOW AGI WORKS —
There are a few different definitions out there about what AGI (Artificial General Intelligence) actually **IS**. YOU ALL can rightly use many ways of handling definitions, because you communicate with different audiences. Please forgive me if I still adhere to many commitments of John Von Neumann, the mathematician whose work underlies MANY branches of science. Von Neumann would tolerate me giving you ONE or TWO useful definitions of AGI, and explaining where it leads.
AGI: universal learning machines, a kind of INTENTIONAL SYSTEM, designed to input some measure of “cardinal utility” U, and to learn the strategy of action or policy which will maximize the expectation value of the future value of U. In modern neural network mathematics, the best way to name these is to call them “RLADP” systems, Reinforcement Learning and Approximate Dynamic Programming. Even today, the old book “Neural Networks for Control” by Miller, Sutton and Werbos from MIT Press is an important source for learning what this means in practice, and understanding where key places like Deep Mind are really coming from. These are systems which LEARN TO DECIDE, in an agile way.
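A minimal sketch of the RLADP idea, under simplifying assumptions of my own (a tabular toy environment, not the neural-network designs in Neural Networks for Control): an agent on a short corridor is given only a utility signal U, and learns the policy that maximizes the expected future value of U.

```python
import random

# Illustrative tabular Q-learning: states 0..4 on a corridor; reaching
# state 4 yields utility U = 1, everything else U = 0. The agent learns
# a policy (which way to step) that maximizes expected future U.
random.seed(0)
N = 5
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}
alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration

for _ in range(500):                        # learning episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy choice between stepping left (-1) or right (+1)
        a = random.choice((-1, +1)) if random.random() < eps \
            else max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)      # walls at both ends
        r = 1.0 if s2 == N - 1 else 0.0     # the utility signal U
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, +1)]) - Q[(s, a)])
        s = s2

policy = [max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)                               # learned policy: always step right
```

The point is not the toy itself but the structure: decide, observe U, and improve the decision rule toward maximum expected future U.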
BUT THERE IS NO ESCAPING the essential importance of “where does U come from?” This is basically just a modern reflection and extension of the most ancient problems of philosophy; Von Neumann’s concept of U traces back clearly to utilitarians like Jeremy Bentham and John Stuart Mill, and back from there to Aristotle’s Nicomachean Ethics, which I remember reading at age 8 when I found it in my mother’s old schoolbooks.
BUT: a more practical definition: modern AGI in practice involves THREE elements, three types of universal learning machine. There is RLADP, which learns to exert decision and control (and which has been applied to anything from monetary transactions to weapons control to words to energy systems). There is learning to predict or model or describe the state of the world, which FEEDS INTO making better decisions. And there is the “simple task” of learning to minimize some function F(W) with respect to weights W.
THE problem of survival for humanity is an example of an RLADP problem, where we try to maximize the probability of human survival, which of course requires further definition and refinement. FOR NOW —
THE OPENAI debate reminds me that the problem of human survival or exaltation is a specific TYPE of RLADP problem, which mathematicians would call “highly nonconvex.” Concretely, it is a MINEFIELD problem, where the paths of possibility ahead of us mostly hit explosive “unexpected” sudden death — but also with aspects of “needle in a haystack” where there are GOOD possibilities we might miss. SOLVING such problems requires a lot of caution and foresight, which is why stronger work in foresight is essential to human survival. SUCH RLADP problems end up requiring solution of highly nonconvex function minimization or maximization problems.
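A toy sketch (my own construction, purely illustrative) of why “highly nonconvex” minimization behaves like a minefield or needle-in-a-haystack search: a single gradient descent settles into whichever valley it starts in, and only a multi-start search over many starting points finds the global minimum.

```python
def f(w):                          # double-well: local minimum near w = +1,
    return (w * w - 1) ** 2 + 0.3 * w   # global minimum near w = -1

def grad(w):                       # derivative of f
    return 4 * w * (w * w - 1) + 0.3

def descend(w, lr=0.01, steps=2000):
    for _ in range(steps):
        w -= lr * grad(w)          # plain gradient descent
    return w

stuck = descend(1.0)               # starts in the wrong valley, stays there
starts = [-2 + 0.5 * k for k in range(9)]       # crude multi-start search
best = min((descend(w0) for w0 in starts), key=f)
print(f(stuck) > f(best))          # True: the single descent missed the global minimum
```

With one variable this is trivial to fix by brute force; with billions of weights, the number of valleys explodes, which is exactly the regime where better function-minimization hardware would matter.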
Early in this century, NSF organized the most advanced research effort ever in probing this mathematics AND connecting it to the intelligence we see in mammal brains: https://www.nsf.gov/pubs/2007/nsf07579/nsf07579.htm
Following that program, I often say “cognitive optimization” to refer to RLADP and intelligent function minimization/maximization. “Cognitive prediction” refers to that other universal learning capability, which is advanced further in werbos.com/Erdos.pdf and in Buzsaki’s recent book on the brain as a prediction machine.
I attach my paper in press from the India Foundation, and another in a book now available from Kozma, Alippi, et al., giving even more details.
Quantum AGI, as I define it (THE canonical definition created in my published papers and patent disclosure), simply ENHANCES these three universal learning capabilities — RLADP, prediction/modeling and function minimization — by HARNESSING the power of quantum physics AS DESCRIBED BY THE GREAT PHYSICIST DAVID DEUTSCH OF OXFORD.
You could call this “quantum cognitive optimization” and “quantum cognitive prediction.”
The foundation which all QAGI is built on is minimization or maximization of nonlinear functions.
It was initially developed (by me) to address minefield or needle in a haystack types of problem, though it looks as if the new types of quantum computers will also give many other improvements.
Here is a metaphor: if you had a million haystacks or gopher holes in your big back yard, to FIND the best needle in a haystack (or deepest gopher hole), WHY NOT HIRE A MILLION SCHRODINGER CATS to work in parallel, and report back which is best?? A million times faster than one-at-a-time search!!
Deutsch’s Quantum Turing Machine is not a brain or an AGI; just a faster type of old sequential computer, a Turing machine.
DWAVE was a HUGE mental leap forward, which would FIT the vision I just described… BUT ONLY if the function minimization at the core of the system is replaced by the kind of hardware which ACTUALLY harnesses these cats. (DWave is like paying for a million cats, but putting them on a leash, locking them up on a patio or a restricted sidewalk. Strong efforts at energy conservation have that effect.)
The papers in our Project Amaterasu folder and recent emails describe how Deutsch’s physics works here, and how to build the hardware.
The UN Open-ended Working Group on the security of and in the use of Information and Communication Technologies (ICTs) held its fourth session July 24-28, 2023 in New York. Allison Pytlak of the Stimson Center has written an insightful account of these proceedings and the main points in contention: article.
The Worldwide Cyber Security Industry is Projected to Reach $266 Billion by 2027
The global cyber security market size is expected to grow from an estimated value of USD 173.5 billion in 2022 to USD 266.2 billion by 2027, at a Compound Annual Growth Rate (CAGR) of 8.9% from 2022 to 2027.
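The quoted figures are internally consistent; a quick check, assuming simple annual compounding over the five years 2022-2027:

```python
# Sanity check of the quoted market projection: USD 173.5B growing at
# an 8.9% CAGR for 5 years should land near the projected USD 266.2B.
start, cagr, years = 173.5, 0.089, 5
end = start * (1 + cagr) ** years
print(round(end, 1))   # 265.7 -- matching the projected 266.2 to rounding
```

Running the calculation in reverse, (266.2 / 173.5) ** (1/5) - 1 gives roughly 8.94%, so the stated 8.9% CAGR is the rounded implied growth rate.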
The increased number of data breaches across the globe, the ability of malicious actors to operate from anywhere in the world, the linkages between cyberspace and physical systems, and the difficulty of reducing vulnerabilities and consequences in complex cyber networks are some factors which are driving the cyber security market growth. However, the shortage of cyber security professionals and budget constraints among SMEs and start-ups in developing economies are expected to hinder the market growth.
By Organization size, SMEs to grow at the highest CAGR during the forecasted period
Small and medium-sized enterprises (SMEs) from a variety of industries are going through a digital transformation and using cloud computing to streamline operations, increase mobility, get rid of on-premises technology, and save costs. To protect their online applications and Application Programming Interfaces (APIs) against unwanted access, vulnerabilities, and attacks, SMEs are using cybersecurity solutions and services. Cybercriminals use automated techniques to attack SMEs’ networks in order to take advantage of their weak security infrastructures.
Therefore, in order to save money, time, and resources, SMEs are seeking cyber security solutions. Additionally, governments are acting to safeguard SMEs in their own nations. But significant problems, including restricted budgets for operational activities, a lack of capital funding, and a shortage of qualified workers, are anticipated to impede market expansion for SMEs in developing nations.
SMEs are vulnerable to new security problems as they implement digitalization at an increasing rate. As a result, the IT departments of SMEs invest in implementing cyber security solutions. Thus, SMEs are expected to grow at the highest CAGR during the forecasted period.
By Vertical, the aerospace and defence vertical accounts for a larger market size during the forecasted period
The civil and military aerospace and defence procurements are included in the aerospace and defence vertical. The rate of growth of security risks in the aerospace and defence sectors is alarming. Attacks on this vertical are intended to harvest extremely sensitive and confidential data from important sectors, such as the government, prime contractors, and suppliers. Big data and increased digitization in nearly every element of the armed forces raise the likelihood of cybercriminal attacks.
The use of IT and telecommunications tools like RADARs and encryption-based wireless technologies for secure communication are the main drivers, and they will assist in expanding the markets. Additionally, the sector is undergoing a significant digital change, which has increased the need for cyber security services and solutions. Thus the aerospace and defence vertical accounts for a larger market share during the forecasted period.
By Organization size, large enterprises to account for the largest market size during the forecasted period
Cyber security solutions help large enterprises and SMEs protect themselves from cyberattacks that aim to breach and undermine their IT infrastructure. For the purpose of protecting their critical assets, large organizations throughout the world continue to implement cyber security solutions at a rapid rate.
In order to include security solutions and services for defending vital assets from cyberattacks, large organisations are redesigning their security policies and architecture. To protect networks, endpoints, data centres, devices, users, and applications against unauthorised use and harmful ransomware attacks, they heavily rely on cyber security.
Large businesses are increasingly using access management tools to enable privileged access to servers and online applications, which promotes market expansion. Large enterprises are more likely to employ cyber security solutions as a result of large budgets for implementing top-notch security solutions and the strong demand for real-time auditing and monitoring of the growing IoT traffic. Thus, large enterprises are expected to account for the largest market size during the forecasted period.
More Information: https://www.globenewswire.com/news-release/2022/09/22/2520978/0/en/The-Worldwide-Cyber-Security-Industry-is-Projected-to-Reach-266-Billion-by-2027.html
What shape should a Cyber Security Programme of Action take?
The UN Open-Ended Working Group (OEWG) on the security of and use of Information and Communication Technology (ICT) is ongoing, with three sessions held to date and a mandate continuing to 2025. One proposal, initially submitted in late 2021 and since endorsed by 60 states, is the “Programme of Action” (PoA). The PoA is intended to establish a “permanent mechanism” under UN auspices for consideration of cyber security matters with a series of follow-up meetings. Much remains to be clarified, however, as to exactly what a PoA would accomplish.
Allison Pytlak, Disarmament Programme Manager at Reaching Critical Will has produced a useful paper on the PoA concept: https://reachingcriticalwill.org/images/documents/Publications/report_cyber-poa_final_May2022.pdf
The paper recalls how PoAs have been utilized in the past to address other UN issue areas of concern and brings out common features. Pytlak draws attention to some of the priority areas for elaboration of a cyber PoA and makes several recommendations as to aspects that could feature in the next iteration of the PoA concept. These include having the co-sponsors develop a “pre-draft” text of a PoA, incorporating some form of accountability mechanism and specifying how non-governmental stakeholders can be engaged in the development and implementation of a future PoA.
Pytlak’s paper will be a highly valuable resource for participating officials and stakeholders in the OEWG process and provides an impetus for an outcome that is more operational than simply declaratory in nature.
Allison Pytlak’s proposal sounds promising. I wonder whether it could be publicized among people who are concerned with digital security issues to invite their comments as well? It’s great that there’s an OEWG to deliberate on this important matter, but eventually it needs to be a public discussion, if not indeed a social movement. Everyone in the world is affected by these issues now.
THEY NEVER GET CAUGHT
How many times a day does someone try to steal from you online or on your phone? There is at least one fraudulent letter in my email every day, and almost every day I get a call from someone pretending to be my bank or credit card company, trying to drag me into something dangerous. I cannot tell real things from fakes. The police never catch them. I doubt that anyone is even trying to catch them.
So is the United Nations going to solve this? I doubt it. Don’t you?
Pro-Ukraine ‘Protestware’ Pushes Antiwar Ads, Geo-Targeted Malware
Brian Krebs | 17 March 2022
Researchers are tracking a number of open-source “protestware” projects on GitHub that have recently altered their code to display “Stand with Ukraine” messages for users, or basic facts about the carnage in Ukraine. The group also is tracking several code packages that were recently modified to erase files on computers that appear to be coming from Russian or Belarusian Internet addresses.
The upstart tracking effort is being crowdsourced via Telegram, but the output of the Russian research group is centralized in a Google Spreadsheet that is open to the public. Most of the GitHub code repositories tracked by this group include relatively harmless components that will either display a simple message in support of Ukraine, or show statistics about the war in Ukraine — such as casualty numbers — and links to more information on the Deep Web.
For example, the popular library ES5-ext hadn’t updated its code in nearly two years. But on March 7, the code project added a component “postinstall.js,” which checks to see if the user’s computer is tied to a Russian Internet address. If so, the code broadcasts a “Call for peace:”
A more concerning example can be found at the GitHub page for “vue-cli,” a popular Javascript framework for building web-based user interfaces. On March 15, users discovered a new component had been added that was designed to wipe all files from any systems visiting from a Russian or Belarusian Internet address (the malicious code has since been removed):
“Man, I love politics in my APIs,” GitHub user “MSchleckser” commented wryly on Mar. 15.
The crowdsourced effort also blacklisted a code library called “PeaceNotWar” maintained by GitHub user RIAEvangelist.
“This code serves as a non-destructive example of why controlling your node modules is important,” RIAEvangelist wrote. “It also serves as a non-violent protest against Russia’s aggression that threatens the world right now. This module will add a message of peace on your users’ desktops, and it will only do it if it does not already exist just to be polite. To include this module in your code, just run npm i peacenotwar in your code’s directory or module root.”
Alex Holden is a native Ukrainian who runs the Milwaukee-based cyber intelligence firm Hold Security. Holden said the real trouble starts when protestware is included in code packages that get automatically fetched by a myriad of third-party software products. Holden said some of the code projects tracked by the Russian research group are maintained by Ukrainian software developers.
“Ukrainian and non-Ukrainian developers are modifying their public software to trigger malware or pro-Ukraine ads when deployed on Russian computers,” Holden said. “And we see this effort, which is the Russians trying to defend against that.”
Commenting on the malicious code added to the “Vue-cli” application, GitHub user “nm17” said a continued expansion of protestware would erode public trust in open-source software.
“The Pandora’s box is now opened, and from this point on, people who use opensource will experience xenophobia more than ever before, EVERYONE included,” NM17 wrote. “The trust factor of open source, which was based on good will of the developers is now practically gone, and now, more and more people are realizing that one day, their library/application can possibly be exploited to do/say whatever some random dev on the internet thought ‘was the right thing they to do.’ Not a single good came out of this ‘protest.’”
Read More Here: https://krebsonsecurity.com/2022/03/pro-ukraine-protestware-pushes-antiwar-ads-geo-targeted-malware/
Re: The Government’s approach to address harmful content online
Submitted by: Rose A. Dyson Ed.D.
President: Canadians Concerned About Violence In Entertainment
Vice President: World Federalist Movement of Canada: Toronto Branch
Author: MIND ABUSE Media Violence And Its Threat To Democracy (2021)
email: rose.dyson@alumni.utoronto.ca or rdyson@oise.utoronto.ca
Phone: 416-961-0853 or 647-382-4773
Dear Committee Members,
Thank you for the opportunity to participate in this discussion on meaningful action to combat hate speech and other kinds of harmful content online. Public concern about harmful media content has now been with us for several decades and the need to address the problem has become increasingly urgent. The five categories of harmful content online identified (hate speech, child sexual exploitation, terrorist activity, content that incites violence, and the non-consensual sharing of intimate images) have skyrocketed as communications technologies have evolved.
As far back as 1975, Judy La Marsh, a lawyer, journalist and former minister in the Liberal Government of Canada, was appointed by the Government of Ontario to chair the Royal Commission on Violence in the Communications Industry. It was empowered to study the effects on society of increasing violence in the media of the day and to make appropriate recommendations on measures to be taken by different levels of government, by industry and by the public at large. Most of the 80-plus recommendations have never been implemented. Some have been repeated in subsequent studies but still not implemented.
In my doctoral thesis, completed at OISE/UT in 1995, I reviewed the research findings of the La Marsh Commission and other studies done up until that time, subsequent recommendations, and evidence (or lack thereof) regarding implementation. Two books on the subject followed, the first published in 2000 and the second earlier this year. A complimentary copy of either one is available upon request. The latest is titled MIND ABUSE: Media Violence And Its Threat To Democracy (2021). Over the past 30 years I have watched the problems mushroom, with increasing evidence of commercial reliance on themes of sex and violence in media production. In addition, we have had fading boundaries between different forms of media, including news, fiction, advertisements and educational programming, leading to catchphrases such as “edutainment” and “infotainment”.
Digital technologies and the internet have magnified the problems with policy makers loath to take on the challenge of much needed and overdue regulation, frequently to avoid accusations of censorship. Inadequate distinctions between individual freedom of expression and corporate freedom of enterprise have persisted. Periodic studies funded by industry are released into the public domain countering evidence of harmful effects thus ensuring no interruptions to business as usual. For decades the cultural industries have been given carte blanche to determine what we see, hear and read.
In 1996, along with 250 other scholars and media activists representing over 88 organizations from around the world, I helped the late George Gerbner, an internationally renowned media scholar, launch the Cultural Environment Movement at Webster University in St. Louis. That Convention was preceded by the International Summit on Broadcast Standards attended by Keith Spicer, then chair of the CRTC and other Canadians representing business and non-profits. In his work, Gerbner frequently referred to violence creep in popular culture and other forms of media, including news and advertisements, as the hidden curriculum for a Mean World Syndrome.
My colleague, retired U.S. Lt. Col. David Grossman, a psychologist and military expert, has written five books on the subject of violent first-person shooter video games and the dangers of indiscriminately marketing these games to the youngest, most vulnerable people on the planet. In his latest book, Assassination Generation: Video Games, Aggression, and the Psychology of Killing (2016), he provides chilling detail on how these have led to mass murders and fueled terrorism. Grossman reveals how violent video games have ushered in a new era of mass homicides worldwide. The trends have led to what he calls Acquired Violence Immune Deficiency Syndrome.
The kind of online hate and extremism that led to the January 29, 2017 mass murders at the Centre culturel islamique de Québec, and on March 15, 2019, in Christchurch, New Zealand, is inherent in the thematic content of numerous video games played by the killers. In both cases news coverage identified evidence of heavy diets of first-person shooter video game playing on the part of these perpetrators. This is a pattern described over and over again by other researchers, among them Mark Bourrie, author of Martyrdom, Murder and the Lure of ISIS, and Megan Condis, author of Gaming Masculinity: Trolls, Fake Geeks, and the Gendered Battle for Online Culture.
What must be recognized is that the Government’s focus on regulating social media and combating harmful content online cannot be confined to “speech only”. Violent forms of fictional entertainment such as video games depict storylines that glorify violence, hatred, antisemitism and sexual exploitation. It would be duplicitous and of marginal value to address the problems involving workplace harassment, misogyny and other excesses on the internet but to leave such content in popular culture unaddressed and unregulated. Countless studies over the years have demonstrated that these fictional depictions lead to learned behaviours based on psychological conditioning that result in distorted value systems, a tendency to resort to violence as a conflict resolution strategy, addiction and feelings of victimization, among other harmful effects.
It has also been demonstrated that violent, first person shooter video games provide fertile soil for sowing the seeds of resentment among young vulnerable white males. An “us versus them” mentality is encouraged, helped along by social media algorithms that capitalize on our genetic tendencies to respond quickly to negative themes. It has also been reported that white supremacist groups watch the latest releases of video games that are most amenable to their purposes of recruitment. Some have taken to producing their own.
The work being done by technology experts such as the Institute of Electrical and Electronics Engineers (IEEE) on a roadmap for 5G and global integration to facilitate more efficient use of energy must also focus on the nature of energy use. Spokespeople for the Institute now stress that more efficient management of the internet’s rapidly growing, and increasingly unsustainable, energy demand is essential to reduce both collective and individual carbon footprints. But, clearly, emphasis on discretionary use is also required. If we are put on a wartime footing, as advocated by Seth Klein in his book A Good War: Mobilizing Canada for the Climate Emergency (2021), rationing of internet use will have to be adopted. In December 2020, Nicholas Kristof wrote in The New York Times that Pornhub, owned by MindGeek in Montreal, was the third most visited and influential website on the Internet. It is inconceivable, in a world focused on sustainability and transitioning to clean energy, that harmful excesses on the Internet should be overlooked and excused as essential components to be protected under the umbrella of civil liberties. Surely, in the race against time, expertise in electronic engineering should not be misdirected toward internet use that fosters social harm.
There are also concerns, expressed by health advocates such as Devra Davis, author of Disconnect: The Truth About Cell Phone Radiation, What the Industry Has Done to Hide It, and How to Protect Your Family (2010), about harmful radiation from digital devices that can cause cancer. In this context it behooves the government to take note of the recent United States Court of Appeals for the District of Columbia Circuit judgment in favour of environmental health groups. It found the Federal Communications Commission (FCC) in violation of the Administrative Procedure Act for not responding to comments on environmental harm. In short, the FCC failed to respond to record evidence that exposure to low-level radiation from digital devices may cause negative health effects.
Re: Strategy to combat hate speech and other harms:
We endorse the move to amend the Canadian Human Rights Act to enable the relevant Commission and Tribunal to review and adjudicate hate speech complaints.
* But over-reliance on industry itself to monitor social media content has proven in the past to be an exercise in futility. One minor exception is the Canadian Broadcast Standards Council, set up in 1993 by the Canadian Association of Broadcasters to respond to complaints of inappropriate content in radio or television programming. This Council could be expanded or duplicated to monitor online content. However, the Council has always been reactive rather than proactive, with no oversight of industry excesses unless complaints arise from the public at large. That needs to change. Allowing the fox to guard the henhouse with no government oversight has never worked.
* Second, definitions of obscenity and sections on child pornography need to be updated and expanded. Research conducted in the latter part of the last century demonstrates how all pornography can be addictive. It also involves social learning processes that lead to themes of aggression and dominance, tendencies that can trickle down to the most vulnerable targets of exploitation: children. Before the 1993 bill on child pornography, which made possession, production and distribution a crime, was passed, considerable attention was paid to the issue by the Government’s Standing Committee on Culture and Communications, set up at that time. It came out with a number of additional recommendations that were never implemented. One of them was to determine the criminal legislative measures needed to include extremely violent forms of entertainment in the Criminal Code in ways that would conform with the Charter of Rights and Freedoms. See Mind Abuse: Media Violence in an Information Age (Dyson, 2000).
* The objective of authorizing the Government to include or exclude categories of online communication service providers from the application of the legislation within certain parameters is important, but there must be complete transparency on how this will be done and on who will provide expert advice on these parameters. Advice must be sought from health providers and other researchers not beholden to industry interests.
* Monitoring of film and video game content for entertainment purposes is now undertaken by provincial classification boards. A national system would be much more efficient. While great care has been taken over the years to ensure gender and racial diversity on most boards, the overall tendency has been for them to bend to the will of industry. Criteria on what is age appropriate should involve input from child development experts. This has yet to happen. Indeed, the prevailing standard for most classification boards throughout the developed world has been set by the industry-funded and industry-operated, Hollywood-based Motion Picture Association of America. That needs to change.
* Legislation should be passed at the national level to ban advertising to children 13 years and under. Such legislation has been in effect in Quebec for over 25 years. From time to time, bills for implementation have been introduced in Canada at the national and provincial levels of government and by boards of health; in 2016 even an editorial in The Globe and Mail called for one. Most developed countries have already adopted this kind of legislation, citing various concerns, among them protecting children from harmful sexual exploitation, violent content, all advertising, the marketing of junk food known to cause physical health problems such as obesity and heart disease, and the dangers of exposure to low-level radiation from digital devices.
* The Committee must not allow itself to be intimidated by industry pushback. On January 14, 2019, it was reported in The Globe and Mail that a proposal from Health Canada to amend the Food and Drugs Act by restricting food and beverage marketing to children had hit a familiar snag: industry protests that such regulation was “unrealistic”, “punitive” and “commercially catastrophic”. The huge jump in the commercial exploitation of children in recent decades is nothing short of tragic. According to the Boston-based Campaign for a Commercial-Free Childhood, founded at Harvard Medical School, over $17 billion was spent by industry in 2006 in the U.S. alone to market products to children, a staggering increase over the $100 million spent in 1983. By that time, over $500 billion in annual purchases was estimated to be influenced by children under the age of 12. These trends are clearly at odds with efforts to reduce consumer-driven habits in the interest of future sustainability.
* A perennially popular solution for dealing with harmful media has been better vigilance from parents, along with media and digital literacy taught in schools by teachers. Although it is obvious that the problem is too big and pervasive for this alone, and that better cultural policy is also urgently needed, there is room for improvement in the provision of reliable, fact-based educational resources. Over the years there has been increasing evidence of subtle, industry-friendly resources creeping into school curricula on the subject. In 1975, the LaMarsh Commission recommended that an advisory board of educators, health professionals and parents be established at the Ontario Institute for Studies in Education at the University of Toronto for the provision of public education. I reiterated the recommendation in my doctoral thesis, completed at the Institute in 1995, and again in my two subsequent books on media violence. Nevertheless, it has yet to be established. Better government funding and support is also needed for NGOs, such as Internet Sense First, founded by Charlene Doak-Gebauer, which now provide urgently needed help to parents and teachers on digital supervision.
* Funding that is independent of industry donors should be mandatory to ensure accuracy in monitoring media violence and other harmful trends on the internet. Important models were established by the late George Gerbner at the Annenberg School for Communication, University of Pennsylvania, and at Temple University in Philadelphia. One example is the Cultural Indicators model, later expanded into the “Fairness” Indicators model and used by Paquette and de Guise at Laval University in Quebec City in their 1994 study, Index of Violence in Canadian Television.
* An Act respecting the mandatory reporting of Internet child pornography by persons who provide an internet service is needed. But it is not clear how this would interface with the Mandatory Reporting Act.
* New legislation requiring regulated entities to monitor harmful content through the use of automated systems based on algorithms would be a useful way to put the new technology to prosocial purposes, given the widespread evidence of how algorithms are currently employed solely for financial gain and for fostering errant behaviour.
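Purely as an illustration of the kind of automated screening such a requirement might entail, the sketch below flags posts against a set of harm categories. The categories, keyword patterns and escalation logic are hypothetical, invented for this example; a real regulated system would rely on trained classifiers and human review, not a static keyword list.

```python
import re

# Hypothetical category -> pattern map (illustrative only; not drawn from
# any proposed regulation or deployed moderation system).
HARM_PATTERNS = {
    "incitement_to_violence": re.compile(r"\b(attack|kill|burn down)\b", re.IGNORECASE),
    "harassment": re.compile(r"\b(worthless|nobody wants you)\b", re.IGNORECASE),
}

def flag_content(text):
    """Return the list of hypothetical harm categories matched by `text`.

    An empty list means the post would not be escalated for human review.
    """
    return [category for category, pattern in HARM_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    for post in ["Let's attack them at dawn", "Have a nice day"]:
        print(post, "->", flag_content(post))
```

Even this toy example shows why oversight matters: the choice of categories and patterns embeds policy judgments, which is precisely why the submission argues that such parameters need transparent, independent expert input.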
* Now, within universities across Canada and beyond, there is a growing emphasis on courses in esports involving first-person shooter video games. This is counterproductive to advocacy from experts calling for critical thinking skills, media and digital literacy, and attention to studies that point to harmful effects. There has also been ample evidence reported in The Globe and Mail of generous subsidies given to video game companies such as Ubisoft without any regard for the nature or content of their productions. Tax breaks and subsidies for harmful video game production and distribution are no more justifiable than breaks for fossil fuel industries in a time of climate crisis. As Globe and Mail business reporter Scott Barlow has pointed out, this poses a moral dilemma (Barlow, October 14, 2017). Furthermore, these subsidies must not be excused or spun by industry pundits as “funding for electronic arts”.
* It is stated that regulated entities would be required to notify law enforcement where there are reasonable grounds to suspect an imminent risk of serious harm to any person or property from potentially illegal content falling within the five categories of harmful content: terrorist content; content that incites violence; hate speech; non-consensual sharing of intimate images; and child sexual exploitation. Yet it is also stated that there would be no obligation to report such content to law enforcement or CSIS. Why not?
* And why would the threshold for such reporting of potentially terrorist and violent extremist content be lower than that for potentially criminal hate speech?
* The proposed legislation for a new Digital Safety Commission of Canada to support three bodies that would operationalize, oversee and enforce the new regime sounds promising. But who exactly would sit on the Recourse Council, the final stage of recourse? Diverse expertise and membership reflective of the Canadian population are essential to avoid having such a Council stacked with former or retired officials sympathetic to the concerns of industry. This would necessitate expertise from the health and social sciences. Transparency in public reporting obligations would also be required.
* An Advisory Board that would provide both the Commissioner and the Recourse Council with expert advice must include more than expertise on emerging industry trends, technologies and content-moderation standards. Who would be expected to provide information on “content-moderation standards”? Like the recommended advisory group for parents and teachers, and like the Recourse Council, such a Board should have funding independent of industry sources, and should include social science expertise and input from both physical and mental health experts. Having the Digital Safety Commissioner of Canada mandated to lead and participate in research and programming, convene and collaborate with relevant stakeholders, and support regulated entities in reducing the five forms of harmful content will only work if input is not confined to industry-related interests. Again, the composition of the Advisory Board must include, along with all the other stakeholders itemized, health expertise.
Re: Compliance and enforcement
* The powers of the Commissioner are necessary and sound reasonable.
Re: Modifying Canada’s existing legal framework, including the Canadian Security Intelligence Service (CSIS) Act
* Centralizing mandatory reporting of online child pornography offences through the RCMP’s National Child Exploitation Crime Centre, to ensure stronger reporting requirements for internet service providers, would help, but continuing vigilance will be needed to ensure that this is happening. Not requiring judicial authorization for reports to law enforcement is necessary to expedite police response in cases where an offence is clearly evident. The same criteria should be applied to CSIS to ensure more timely access to relevant information that could help mitigate the threat of online violent extremism. For this process to take four to six months, as it does now, seriously diminishes its capacity to be effective.
Again, thank you for the opportunity to participate in this timely discussion. If provision is made for appearances before the committee via Zoom, I would appreciate the opportunity to submit a statement.
References:
Barlow, S. (2017a, October 14). As investing theme, video games score big. The Globe and Mail, p. B3.
Barlow, S. (2017b, October 24). Getting hooked on gaming stocks. The Globe and Mail, p. B6.
Bourrie, M. (2016). The Killing Game: Martyrdom, Murder and the Lure of ISIS. Toronto, ON: HarperCollins Canada.
Condis, M. (2018). Gaming Masculinity: Trolls, Fake Geeks, and the Gendered Battle for Online Culture. Iowa City, IA: University of Iowa Press.
Davis, D. (2010). Disconnect: The Truth About Cell Phone Radiation, What the Industry Has Done to Hide It, and How to Protect Your Family. New York: Dutton.
Doak-Gebauer, C. (2019). The Internet: Are Children in Charge? Tellwell, Canada.
Dyson, R. A. (2000). Mind Abuse: Media Violence in an Information Age. Montreal: Black Rose Books.
Dyson, R. A. (2021). Mind Abuse: Media Violence and Its Threat to Democracy. Montreal: Black Rose Books.
Grossman, D. (2016). Assassination Generation: Video Games, Aggression, and the Psychology of Killing. Boston, MA: Little, Brown & Company.
Klein, S. (2021). A Good War: Mobilizing Canada for the Climate Emergency.
United States Court of Appeals for the District of Columbia Circuit. EHT Victorious in Federal Court Case Against FCC on Wireless Radiation Limits. August 14, 2021.
Putin Approves Ratification of CIS Agreement on Cyber Security Cooperation
TASS: Russian News Agency | 1 July 2021
“MOSCOW, July 1. /TASS/. Russian President Vladimir Putin signed a bill on ratifying an agreement on cooperation between the Commonwealth of Independent States (CIS) countries in the fight against cyber crimes.
The document was published on the official portal of legal information.
The agreement was inked in September 2018 at the meeting of the CIS Heads of State Council in Dushanbe, Tajikistan. The document is aimed at establishing modern legal mechanisms for practical interaction of Russian competent authorities with colleagues from other CIS countries for effectively preventing, detecting, thwarting, investigating and solving cyber crimes.
This involves cooperation in the exchange of data on impending or committed crimes and persons behind them, responding to the calls for assistance in providing data that can facilitate the investigation as well as coordinated operations.”
Read more
The agreement defines such terms as malware, data system, unauthorized access to information. The document also establishes that the parties, in line with their national legislation, recognize as criminal offenses the destruction, blocking, modification or copying of data obtained in an unauthorized way, the creation of computer viruses, violation of the rules for using a computer system, if this entailed grave consequences as well as theft by changing the data stored in the system.
Furthermore, the agreement covers such acts as the distribution of pornography, extremist materials on the Internet, the creation of software and hardware for hacking computers and copyright violation.
Under the agreement, the sides will exchange information about impending or committed crimes and ways to prevent them, perpetrators as well as coordinate joint activities. It also stipulates internships for specialists, seminars, the creation of data protection programs, the exchange of scientific publications and regulatory legal acts.
Link: https://tass.com/politics/1309447
Defense Official Testifies About DOD Information Technology, Cybersecurity Efforts
Terri Moon Cronk | DOD News | 30 June 2021
“President Joe Biden’s interim National Security Strategic Guidance and Secretary of Defense Lloyd J. Austin III’s priorities drive key areas on the Defense Department’s cloud, software network modernization, cybersecurity work, workforce, command-and-control communications and data, DOD’s acting chief information officer said.
John Sherman told the House Armed Services Committee’s panel on cyber, innovative technologies and information systems that cloud computing is a critical step for the enterprise. “We’ve made cloud computing a fundamental component of our global [information technology] infrastructure and modernization strategy,” he said yesterday. “With battlefield success increasingly reliant on digital capabilities, cloud computing satisfies the warfighters’ requirements for rapid access to data, innovative capabilities, and assured support.”
The DOD remains committed in its drive toward a multi-vendor, multi-cloud ecosystem with its fiscal year 2022 cloud investments, which represent more than 50 different commercial vendors, including commercial cloud service providers and system integrators, he added.
And the DOD’s ability to leverage that technology has matured over the last several years, and it’s driving hard to accelerate the momentum even more in that space, Sherman said.”
Read more
“Software capabilities and networks are also critical to our success,” he said. “[We] will release a software modernization strategy later this summer that builds on already developed guidance. We are dedicated to delivering resilient software capability at the speed of relevance. The FY [20]22 budget includes investments to enable software modernization with cloud services as the foundation to fully integrate the technology process and people needed to deliver next-generation capabilities.”
In the meantime, the COVID-19 pandemic crisis changed the way the DOD works, Sherman said. “The department deployed a commercial-based collaboration capability to enable the rapid transition to remote work. While cloud access and remote work introduces a significant burden to the DOD networks, we continue to deploy secure and agile solutions. All of these efforts must address cybersecurity from the start. The secretary previously discussed the department’s investments in cybersecurity and cyberspace operations that will maintain the momentum of our digital modernization strategy,” he noted.
The fiscal 2022 DOD cybersecurity budget maintains the enhanced funding levels established in fiscal 2020 and 2021 for key-enterprise cybersecurity capabilities that will enable the DOD to advance its focus on zero trust and risk management and drive its newly advanced investments to enhance resilience and cyber defenses, the acting CIO said.
“We take our responsibilities in this area very seriously, given the threat landscape we face,” Sherman said. “While all divisions on our CIO team support warfighting, it is command, control and communications that might be most closely linked to the warfighter in the ground, sea, air and space domains. The critical capabilities in this portfolio, positioning navigation and timing, electromagnetic spectrum enterprise, and 5G are key priorities for the enterprise — especially as we face threats from our near-peer competitors.”
The DOD often says that data is the ammunition of the future, he said, adding, “The department has prioritized ensuring the timely, secure and resilient access to data needed for military advantage in all-domain operations. While data management is not directly tied to specific program elements in the fiscal 2022 budget request, we are identifying, assessing and tracking our data-related investments as part of the budget certification process that I lead.”
Link: https://www.defense.gov/Explore/News/Article/Article/2678059/defense-official-testifies-about-dod-information-technology-cybersecurity-effor/
House Panel Approves DHS Bill with ‘Historic’ Funding for Cybersecurity
Mariam Baksh | Nextgov | 30 June 2021
“A bill to fund the Department of Homeland Security now heads to the full Appropriations Committee in the House after passing unopposed through the related subcommittee with $2.42 billion for the Cybersecurity and Infrastructure Security Agency.
“As the nature of the threats facing the country changes, the missions and investments of the Department of Homeland Security must quickly adapt and respond. This bill makes historic investments in cyber and infrastructure security,” said Rep. Lucille Roybal-Allard, D-Calif., chairwoman of the Appropriations subcommittee on homeland security.
The bill approved Wednesday—which includes funding to deal with contentious immigration issues and a host of other things, such as defending the U.S. against Russian aggression in the Arctic—makes $52.81 billion available to DHS in discretionary funding, $934 million more than for 2021. Roughly a third of that increase—$397.4 million—went to boosting CISA, DHS’s newest agency.
After the committee released a draft of the bill Tuesday, Rep. Jim Langevin, D-R.I., a member of the Cybersecurity Solarium Commission, thanked Roybal-Allard for CISA’s funding level in the bill, which is also $288 million more than President Joe Biden requested for the agency.
“If we are going to stop the current wave of ransomware and prevent another SolarWinds-like hack, Congress must step up to the plate and adequately fund CISA,” Langevin said. “I’m thrilled that the Appropriations Committee is allocating $2.42 billion for CISA, our nation’s premier cybersecurity agency, in line with the Solarium Commission’s recommendation. For months, I’ve been calling for Congress to allot more resources for CISA, and I’m so grateful to Chairwoman Roybal-Allard for her abiding commitment to shoring up our nation’s cyber defenses.”
Read more
During the markup, Roybal-Allard noted that while immigration can be a difficult place to find consensus, Republicans and Democrats agreed more than they disagreed on other aspects of the bill. With recent high-profile cyberattacks and ransomware plaguing the country, cybersecurity was likely one of those areas.
“As recent events like the Colonial Pipeline hack have demonstrated, it is obvious that we must do more to secure our nation’s cyber infrastructure,” Appropriations Committee Chair Rosa DeLauro, D-Conn., said. “That’s why this bill’s investments in preventing cyber attacks and rooting out cyber intrusions are so critical.”
Link: https://www.nextgov.com/cybersecurity/2021/06/house-panel-approves-dhs-bill-historic-funding-cybersecurity/182690/
ASEAN Cyber Challenge in the Spotlight With New Center
Prashanth Parameswaran | The Diplomat | 30 June 2021
“One of the items of note to have come out of the recently concluded virtual ASEAN Defense Ministers Meeting (ADMM) on June 15 was the formalization of a cyber center of excellence based in Singapore. While the development itself was not surprising, it nonetheless spotlighted the continued significance of cyber security as a defense issue of importance for Southeast Asian states, as well as some of their key partners.
Cybersecurity has been an increasing focus for Southeast Asian states as well as ASEAN as a grouping in the context of the region’s attempts to balance the opportunities afforded by the digital economy with the challenges posed by the increasing sophistication of cyber threats in an increasingly networked world and their links to other challenges such as terrorism.
Specifically, these issues have been recently addressed by the ADMM, widely characterized as the premier defense institution within ASEAN. Recent years have seen the institutionalization of a new ADMM-Plus cyber security working group in 2016 and the establishment of new bodies like the ASEAN-Japan Cybersecurity Capacity Building Center, which was announced during Thailand’s 2019 ASEAN chairmanship.”
Read more
We saw this focus reinforced yet again at the latest ADMM meeting hosted by current ASEAN chair Brunei. The Bandar Seri Begawan Declaration adopted by the ADMM on June 15 on promoting a “future-ready, peaceful and prosperous ASEAN” noted some of the advances made in the cyber domain and also included some measures designed to boost future activity in this space.
One of the concept papers adopted by the meeting concerned the establishment of a new cyber center of excellence. The center, formally called the ADMM Cybersecurity and Information Center of Excellence, would be based in Singapore and would be designed to “promote cooperation on cybersecurity and information within the defense sector, enhance multilateral cooperation amongst ASEAN defense establishments against cyber attacks, disinformation, and misinformation.”
The idea of such a center being hosted in Singapore comes as little surprise. Singapore has already been working to establish itself as a leader within ASEAN on cyber issues through a series of initiatives over the years covering various areas including capacity building and dialogues. This includes the formal announcement of the ASEAN-Singapore Cybersecurity Center of Excellence back in 2019, at the fourth ASEAN Ministerial Conference on Cybersecurity.
Few details have been publicized thus far about the new center’s role within the ADMM. Singapore Defense Minister Ng Eng Hen cited the development as an example of the importance of finding new avenues for strategic dialogue and practical cooperation to deal with current and new security challenges and promote sound analysis and information sharing even amid the coronavirus pandemic, which contributed to the cancellation of this year’s iteration of the Shangri-La Dialogue.
As specifics become clearer, the development of the center of excellence will warrant close scrutiny. This includes the core areas in which it looks to make progress, namely research, training, and information sharing, as well as how the center fits in with already existing bodies and other collaborative endeavors. A case in point was the adoption at the ADMM meeting of a concept paper on the establishment of an ASEAN Cyber Defense Network to link the cyber defense operation centers of member states. Such markers, and others like them, will offer more of a sense of how the institution evolves as part of wider efforts by Southeast Asian states to manage cyber challenges over the next few years.
Link: https://thediplomat.com/2021/06/asean-cyber-challenge-in-the-spotlight-with-new-center/
UK Cyber Security Council Launches Opening Initiatives
James Coker | Infosecurity | 30 June 2021
“The UK Cyber Security Council has launched its first two initiatives as part of its remit to boost professional standards in the cyber industry.
The council, which started work as an independent body on March 31 2021, has invited 16 members of the Cyber Security Alliance to apply for a role in determining the terms of reference for two new committees: a Professional Standards & Ethics Committee and a Qualifications & Careers Committee. The Cyber Security Alliance is a group of organizations that the UK government established in 2019, from which the council was set up.
The two new committees will be involved in helping ensure a common set of standards are adopted throughout education and training interventions related to cybersecurity. This represents the first stage to provide a focal point through which industry and the professional landscape can advise, shape and inform national policy on cybersecurity professional standards.”
Read more
The council added that while representatives of the Cyber Security Alliance will develop the terms of reference, the committees will be made up of members of the council. Membership is only open for expressions of interest at this stage, with the application process beginning shortly and new members joining from September.
Additionally, the council has announced it will be working on an initial mapping of CyBOK’s Qualifications Framework onto a public-facing Career Pathways Framework.
Don MacIntyre, the interim chief executive of the UK Cyber Security Council, commented: “While the Council is uniquely supported by the UK Government and has a Board of experienced industry professionals, it will be through its members that the UK Cyber Security Council will play a central role in driving the cybersecurity industry forwards. We don’t have the luxury of starting with something ‘easy’: professional standards and qualifications and careers are the two stand-out issues facing the profession, so we’re going to hit the ground running. There will never be a better opportunity for the profession to influence its own direction and development than joining the council and getting involved with these first two committees.”
All 16 members of the Cyber Security Alliance have also been offered the honorary status of Founding Member of the Council in recognition of their efforts in developing the body. However, they will still need to apply for regular member status to contribute to the council’s activities going forward.
Link: https://www.infosecurity-magazine.com/news/uk-cyber-security-council-opening/
Incremental Progress or Circular Motion? – The UN Group of Governmental Experts (UNGGE) Report 2021
Making progress on complex issues in a forum like the United Nations with 193 state parties and a consensus decision-makingOne of the most difficult problems that the GGEs faced was the question of how the conduct of states in cyberspace related to international law, including international humanitarian law. A major accomplishment of the 2013 GGE was the affirmation that international law, including the UN Charter, applied to cyberspace. It was soon apparent however that this affirmation had not resolved underlying differences over the interpretation of how international law applied to the cyber conduct of states, particularly in the context of international security. Disagreement over this question had been the proximate reason for the failure of the previous GGE to reach a consensus outcome in 2017. The place of international humanitarian law (aka the law of armed conflict) in this new realm of military operations was especially contentious. Some states sought a confirmation that international humanitarian law would cover state cyber operations, whereas others resisted the notion arguing that this could sanction treating cyberspace as a legitimate domain for armed conflict., if the progress achieved appears more of a circular than linear nature.
This situation is evident in the final report of the UN Group of Governmental Experts (GGE) on “Advancing responsible State behaviour in cyberspace in the context of international security” adopted at the group’s fourth and final meeting May 28, 2021.i The GGE which operated in the 2019-2021 timeframe with 25 nationally appointed “experts” was the most recent in a series of six such GGEs that have been organized by the UN since the turn of the century.ii Two of these (2003-2004 and 2016-2017) failed to achieve consensus and didn’t produce a substantive report. Four were able to agree on consensus reports in 2010, 2013, 2015 and the most recent in 2021. The chief aim of all these GGEs was to develop “norms of responsible state behaviour in cyberspace” as part of the effort to determine how the potent technology of the Internet and related computer networks could be managed in light of the UN’s goal of maintaining international peace and security.
This dispute surfaced in the proceedings of the UN Open-Ended Working Group (OEWG) on “Developments in the field of Information and Telecommunications in the context of International Security”, which ran in parallel with the GGE in the 2019-2021 timeframe and was able to arrive at a consensus report at its final meeting in March 2021.iii This result was only achieved by dividing the report into two sections: one that had consensus approval and a “Chairman’s Summary” containing elements that could not command consensus agreement and had to be issued in a non-binding manner under the Chairman’s own authority. The international humanitarian law issue fell victim to this cut, being relegated to the Chairman’s Summary despite the support of many states and an energetic plea by the International Committee of the Red Cross to preserve a positive reference to it in the main report. The ICRC argued that acknowledging that international humanitarian law would apply to an armed conflict occurring in cyberspace should in no way be construed as condoning the militarization of cyberspace or legitimizing cyber warfare. In the event, this construction was not sufficient to persuade skeptical states to accept the ICRC’s proposed text in the consensus report.
The fate of this issue in the OEWG is relevant to that of the GGE, as observers had hoped that the latter forum (operating under a very similar mandate to that of the OEWG) might be able to provide “value added” to the OEWG proceedings by clarifying this crucial relationship between state conduct and international law. Part of this hope rested on the smaller grouping of the GGE and its more private deliberations. While the issue was addressed in the GGE report, it was not resolved; essentially, the question was kicked down the road. The key sentence reads: “The Group recognized the need for further study on how and when these principles [IHL] apply to the use of ICTs by States and underscored that recalling these principles by no means legitimizes or encourages conflict”.iv As much of the offensive cyber activity conducted by states, which the GGE refers to as “malicious activity”, happens below the threshold of armed conflict, the international community is not really any further along in its understanding of what legal constraints apply to these operations.
This gap is all the more worrisome when one considers the major growth in damaging and disruptive offensive cyber operations carried out by states and/or non-state actors during the couple of years in which the GGE and the OEWG have been functioning. This increased level of threat is acknowledged by the GGE at several points in its report: “Incidents involving the malicious use of ICTs by States and non-state actors have increased in scope, scale, severity and sophistication”; “The Group underlines the assessment of the 2015 [GGE] report that a number of States are developing ICT capabilities for military purposes and that the use of ICTs in future conflicts between States is becoming more likely”; “The Group notes a worrying increase in States’ malicious use of ICT-enabled covert information campaigns to influence the processes, systems and overall stability of States.”; “Harmful ICT activity against critical infrastructure that provides services domestically, regionally or globally…have become increasingly serious.”; “The COVID-19 pandemic has demonstrated the risks and consequences of malicious ICT activities that seek to exploit vulnerabilities in times when our societies are under enormous strain”; “New and emerging technologies expand the attack surface, creating new vectors and vulnerabilities that can be exploited for malicious ICT activity”. After such a litany of rising threats, the Group’s conclusion that “Such activity can pose a significant risk to international security and stability, economic and social development, as well as the safety and well-being of individuals” comes across as understated and anticlimactic.
In the face of these burgeoning threats what defences has the GGE to offer? It basically can only revert to the eleven norms of responsible state behaviour agreed as part of the 2015 GGE. A rather limp injunction is directed at those responsible: “States are called upon to avoid and refrain from the use of ICTs not in line with the norms of responsible state behaviour”. vi The impression left in reviewing the chief body of the report, which consists of reproducing the 11 norms of the 2015 GGE with some annotation, is that matters have not progressed much beyond the norms agreed six years ago. While the GGE claims that it has “developed additional layers of understanding to these norms” these layers seem rather thin and even threadbare. Frequently, the report simply offers up a tentative recommendation for states to consider further action in realizing the normative goals. For example, in a section on the issue of attribution, the report “…recommends that future work at the UN could also consider how to foster common understandings and exchanges of practice on attribution”.vii The task is passed on to some unspecified body at some indeterminate future point in time.
Similarly, in a section devoted to cooperation to counter terrorist or criminal use, the report’s advice is that “States may need to consider whether new measures need to be developed in this respect”. viii The report notes the utility of common templates to facilitate requests for assistance and the response to them, but then merely states: “Such templates could be developed at the bilateral, multilateral or regional level”ix. On the sensitive issue of vulnerability disclosures (and the unmentioned black market in “zero-day” cyber exploits in which government buyers have driven prices up exponentially) the report again manages only a convoluted and theoretical response: “At the national, regional and international level, States could consider putting in place impartial legal frameworks, policies and programmes to guide decision making on the handling of ICT vulnerabilities and curb their commercial distribution as a means of protecting against misuse that may pose a risk to international peace and security or human rights and fundamental freedoms”.x Too often the report’s recommendations have a diffuse, aspirational quality of the “somebody might consider doing something about this at some point” variety.
The GGE, like the OEWG before it, gives only a brief, ritual nod to the contribution that other stakeholders (“the private sector, civil society, and the technical community”) could make to inter-state dialogues.xi The GGE in its consideration of the existing norms also fails to recognize the positive role that accountability mechanisms for implementation could play in incentivizing states to align their cyber practices with the “norms of responsible behaviour” they have endorsed. As with the OEWG, the GGE has not really advanced tangible action to curb malicious cyber activity. Regrettably, the GGE efforts seem to have yielded more circular motion than real progress. Translating the 2015 norms from the status of declaration to one of implementation remains, six years after they were agreed, largely unfinished business for the UN.
Link: https://ict4peace.org/wp-content/uploads/2021/06/GGECyber2021Circular-Motionf.pdf
Cyberattacks Grind Hanford Nuclear Energy Workers’ Benefit Program to a Halt
Patrick Malone | The Seattle Times | 10 May 2021
“Cyber attacks on the U.S. government have abruptly paused processing of benefit applications for workers who were sickened while working on nuclear weapons programs at Hanford and other Department of Energy sites, delaying aid to some dying workers, according to advocates.
Without warning, advocates from the Alliance of Nuclear Workers Advocacy Group received notice late last Friday that effective Monday, a vital component of the Energy Employees Occupational Illness Compensation Program would be offline for two to four months.
The Radiation Dose Reconstruction Program databases’ sudden hiatus could delay approval of new benefits for groups of workers who believe they’ve been exposed to workplace hazards.
Among them are more than 550 workers from Hanford, a mothballed plutonium processing site in Richland, who were potentially exposed to radiation and toxins when they were provided leaky respirators, according to a Seattle Times investigation last year.
Those workers are seeking inclusion in the federal benefits program administered by the Department of Labor. The National Institute of Occupational Safety and Health plays an instrumental role in determining eligibility.”
Hanford, born in secrecy during World War II in a rush to develop the first atomic bomb, processed the plutonium fuel for nuclear weapons for four decades, a process that fouled the 580-square-mile site with radioactive waste and toxic vapors that sickened and killed many workers.
Washington’s U.S. Sen. Patty Murray and Rep. Adam Smith, both Democrats, sponsored legislation in response to The Times investigation that would expand benefits to include the Hanford cleanup crew who were given faulty respirators and other nuclear workers across the country who aren’t yet eligible.
Others who could be affected are some 1,378 individual workers across the country currently applying for assistance, and those with recent terminal diagnoses, who normally would be eligible for benefits awarded as quickly as a day after application. Those benefits can be worth hundreds of thousands of dollars.
“Terminally ill workers often do not have 2 to 4 months to live,” Terrie Barrie, ANWAG founder, wrote in a Monday, May 3, letter to NIOSH director Dr. John Howard. “Will they no longer have the option to have their claim expedited so that they can receive the medical and financial benefits before they die?”
The source and nature of the cyberattacks are unclear, but in a May 4 letter to ANWAG, Howard said that an ongoing review of the energy workers’ compensation databases “identified very significant concerns about the cybersecurity integrity of the Program’s claimant database,” forcing an immediate and secret shutdown of the claims process.
Giving advance notice of efforts to address the cyber-vulnerabilities “might have increased the imminent threat to the Program’s databases,” Howard wrote.
“The recommendation by information technology specialists this past week to take the system ‘off-line’ without advance notice (to protect the at-risk databases) led to our having to announce the Initiative without advance notice.”
NIOSH is steward to vast troves of private information on people seeking benefits, including Social Security numbers, medical and financial data.
Cyberattacks against federal agencies in recent months have exposed broad vulnerabilities. The cyber incursion of SolarWinds software discovered last December, which the intelligence community has linked to Russian hackers, was “one of the most widespread and sophisticated hacking campaigns ever conducted against the federal government and private sector,” according to the Government Accountability Office.
NIOSH is not among the federal agencies listed as victims of that breach. However, the U.S. Department of Energy’s National Nuclear Security Administration, which develops and builds nuclear weapons, and its Richland Field Office, which has oversight responsibility for Hanford, were among the federal agencies stricken. The Energy Department reported the breach had reached its business operations databases, but not sensitive national security data.
Congress adopted the Energy Employees Occupational Illness Compensation Program in 2000 to address a grim legacy of the U.S. nuclear weapons program, an acknowledgement that, beginning in the 1940s with the Manhattan Project, many workers were exposed to radiation and other health hazards that would later sicken and sometimes kill them.
Workers or their surviving heirs can be eligible for up to $250,000 and in many cases medical care if they are determined to be ill due to toxic workplace exposures. Up to $150,000 in benefits is available to those who contract cancer based on workplace radiation exposures.
The program relies on historical information such as what hazards were present in a particular worksite, how much time a worker spent there and the types of illnesses the workers developed, data that helps create a dose reconstruction to assess exposure. Groups of workers can petition NIOSH for inclusion, or individuals can pursue claims.
The dose reconstruction program is essential to determining whether workplace exposures are the likely cause of illnesses, a requirement to qualify for benefits. Even before the sudden delay in processing claims, some group petitions languished for up to a decade.
The suddenness of the shutdown caught sickened nuclear workers unaware. D’Lanie Blaze, an ANWAG advocate for nuclear workers in Southern California, said she’d spent the week fielding calls from federal contractors tasked with setting up benefit eligibility appointments for applicants, and broke news of the standstill to them because the agency hadn’t notified them.
“I find it unacceptable that NIOSH’s own schedulers hadn’t been notified of these disruptions,” Blaze said. Now she is bracing for panicked calls from applicants as news of the delay reaches them. “If they’re in the program, they’re on borrowed time because they have cancer or other conditions. To have these delays further impact claimants is unacceptable.”
NIOSH is in the process of developing an interim manual “paper process” as a workaround to the dose-reconstruction computer system, Howard told worker advocates in his letter.
“The United States government first harmed these dedicated individuals who worked in defense of our country by not adequately protecting them from radiation exposure,” Barrie wrote to the NIOSH director. “Now the government is harming them once again.”
Link: https://www.seattletimes.com/seattle-news/times-watchdog/cyberattacks-grind-hanford-nuclear-energy-workers-benefit-program-to-a-halt/
Defense is a whole lot harder than offense in this game. And if you catch the hackers, what are the penalties? The heaviest weapon would be economic sanctions against another country, if you could prove that the hackers were government agents. And how far have economic sanctions worked in other cases? Not an impressive record of success. The Russians offered to negotiate treaties a while back but nobody took up their offer. It’s easy to understand why not, but look where things are headed now!
The Cybersecurity 202: A Group of Industry, Government and Cyber Experts have a Big Plan to Disrupt the Ransomware Crisis
Tonya Riley with Aaron Schaffer | The Washington Post | 29 April 2021
“A task force of more than 60 experts from industry, government, nonprofits and academia is urging the U.S. government and global allies to take immediate steps to stem a growing global crisis of cyberattacks in which hackers seize computer systems and data in exchange for a ransom.
The group, which issued a report today, says swift, coordinated action can disrupt and deter the growing threat of cyberattacks that use ransomware, a malicious software that locks up computer systems so that criminals can demand ransom in exchange for access.
“We’re seeing critical parts of the economy being hit by ransomware, including, for example, health care in particular,” says task force co-chair Megan Stifel, executive director of Americas at the Global Cyber Alliance. “When you start to see a broad scale of victims across multiple elements of the economy being hit there can ultimately, if not abated, be catastrophic consequences.”
“Hackers have hit thousands of victims, including critical services such as hospitals and local governments, during the pandemic. This week alone hackers hit police departments in Maine and Washington D.C. In the case of the latter, hackers leaked sensitive documents, a tactic that is becoming increasingly common in ransomware attacks. In February, Secretary of Homeland Security Alejandro Mayorkas called the uptick in the hacks an “epidemic”; he will speak at an event rolling out the group’s report today.
The growing threat of ransomware to those critical services is what pushed the nonprofit Institute for Security and Technology to form the task force in January, says IST CEO and Ransomware Task Force co-chair Philip Reiner.
“I think that the realization of watching those who work on cyber security, watching folks really scrambling to collaboratively staunch the tide of these kinds of attacks — it struck us that there needed to be a coordinated, comprehensive approach taken to really get after this and that piecemeal efforts weren’t going to be sufficient,” says Reiner.
The report from the Ransomware Task Force includes 48 recommendations for policymakers and industry to disrupt the ransomware ecosystem. The recommendations focus on five key areas: international cooperation; coordination between the private and public sector; a whole-of-government approach, including an interagency task force; establishing response and recovery support for victims; and stronger oversight of the cryptocurrency industry used by criminals for payments.
Government organizations with representatives on the task force include the FBI, the Cybersecurity and Infrastructure Security Agency, the Secret Service, the National Governors Association and the New York Department of Financial Services.
The report calls on the White House to stand up an interagency group to combat the problem. It also urges greater collaboration with the private sector and the establishment of a private-industry led ransomware incident sharing network.
Lawmakers and U.S. intelligence officials have hammered on the need to fix the gap in information sharing between the private and public sector in the wake of the SolarWinds attack. The massive Russian hacking campaign that infiltrated nine federal networks could have gone undetected for months longer had private cybersecurity firm FireEye not notified SolarWinds and the government.
“What’s key for the private sector here is not only do we have a national strategy that’s well resourced and that allows for prioritization, we have the ability to share information among each other and with law enforcement and with governments,” says co-chair Kemba Walden, assistant general counsel for Microsoft’s digital crimes unit. “I think transparency goes a long way, especially if you’re part of the security community, to disrupt and to take action to operationalize that information.”
The task force also recommends greater coordination with foreign governments and international law enforcement to take out hacker infrastructure and shut down safe havens.
That pressure could come in the form of economic and trade sanctions — like those recently launched against Russian companies and diplomats by the Biden administration — or other means of withholding assistance or publicly calling out governments harboring hackers.
The changes will take legislative action.
Some of the report’s proposed solutions require action by lawmakers and government officials. They’re already considering better information sharing between the private and public sector following the SolarWinds attack, and they recently introduced legislation to increase emergency response funding for cyberattacks.
The House Committee on Homeland Security’s cybersecurity subcommittee will hear next week about the report’s recommendations when it hosts a hearing to combat ransomware. The hearing will feature Stifel, John Davis of Palo Alto Networks as well as National Association of State Chief Information Officers president Denis Goulet, according to a source familiar with plans for the hearing.
Members of the task force expressed optimism about steps the U.S. government is already taking, including DHS’s plans for an accelerated ransomware effort as well as the Justice Department’s recent creation of a task force addressing ransomware.
But they say more work is needed. For instance, the Treasury Department could step in to heighten oversight of cryptocurrency markets using existing anti-money laundering and terrorism laws.
Members of the task force were optimistic the recommendations could make a big impact — if officials act immediately and view the report as a whole.
“The report is written with the idea that you have to take all of the actions in order to have an impact. So there are a lot of moving parts. Taking that and actioning it all at once and quickly to keep up with the pace of the crime, I think that’s going to be the biggest challenge. That’s not to say I think it’s over,” says Walden. “Maybe I’m an optimist, but I think we can meet that challenge.”
The keys
Biden calls for infrastructure improvements to combat cyberattacks in his first joint address to Congress.
He called for modernizing the power grid, which is currently “vulnerable to storms, hacks and catastrophic failures” as well as improvements to public education to help build out the workforce.
President Biden’s first 100 days have been defined by two major cyberattacks, one of which the U.S. responded to with sanctions against Russia as Biden noted in his speech.
The president also touched on America’s need to partner with allies to address growing threats.
“No one nation can deal with all the crises of our time alone – from terrorism to nuclear proliferation to mass migration, cybersecurity, climate change – and as we’re experiencing now, pandemics,” he said.
A top Justice Department official defended the use of warrants to remove malware.
John Demers, the assistant attorney general of the Justice Department’s national security division, said the government is using the authority “judiciously” and on a case-by-case basis. The comments come weeks after Justice announced an operation to remove back doors on hundreds of U.S.-based servers that were infected by hackers who exploited weaknesses in Microsoft Exchange software.
Asked about DOJ’s development of policies for removals, Demers said “now that we’ve had this experience, that’s the kind of discussion that we’re having now internally.”
“I don’t know that we see a need for new legislation” to give the Justice Department additional investigative powers, Demers said. “By and large, we have what we need.”
The White House endorsed a water infrastructure bill’s cybersecurity provisions.
The endorsement comes in the wake of a February cyberattack on a Florida water treatment plant, NextGov’s Mariam Baksh reports. The bill would provide $25 million in annual grants as part of a clean water infrastructure program that would allow recipients to use the money to patch holes in their cyber defenses.
The legislation “promotes resiliency projects to address the impacts of climate change and makes explicit that cybersecurity projects are eligible for key programs,” the White House said in the statement.
Hackers posted personal information about D.C. police officers.
Hundred-page dossiers on five current and former D.C. police officers were posted, NBC News’s Kevin Collier reports. The files include polygraph results and other personal information, and come as hacks-for-ransom, such as this one, reach a fever pitch across the United States.
The FBI is investigating the incident. A group calling itself Babuk has asked for a ransom in exchange for not publishing the stolen data.
A former police officer whose data was leaked said that the information was authentic and he had not been contacted by the police department. A D.C. police spokesperson did not respond to a question from Collier about the five officers, but pointed to a YouTube video of acting chief Robert J. Contee III.
Link: https://www.washingtonpost.com/politics/2021/04/29/cybersecurity-202-group-industry-government-cyber-experts-have-big-plan-disrupt-ransomware-crisis/
U.S. Nuclear Modernization: Security & Policy Implications of Integrating Digital Technology
8 December 2020 | NTI
“An expansive, complex undertaking to modernize the United States’ nuclear bombs and warheads, their delivery systems, and the command, control, and communications infrastructure around them is underway. It is a project that carries the potential for great benefits through an increase in digital systems and automation, as well as the addition of machine learning tools into the U.S. nuclear triad and the supporting nuclear weapons complex. But it also is one that carries significant risks, including some that are not fully understood. If it does not take the time to protect the new systems integrated with some of the deadliest weapons on earth from cyberattack, the U.S. government will be dangerously outpaced in its ability to deter aggressors.”
Given the stakes, why take on new risks at all? The reason to integrate digital technologies into U.S. nuclear weapons systems is clear: this is the first significant upgrade of U.S. nuclear weapons systems in nearly 40 years, and the old systems need replacing. The most efficient way to update the full nuclear triad of bombers, submarines, and ground-based missiles, as well as the bombs, warheads, and command, control, and communications network, is to use today’s technology, including digital tools. From digital displays on bomber aircraft to advanced early-warning sensors and machine-learning-enabled nuclear options planning tools, this U.S. nuclear weapons recapitalization, like past modernizations, will be a product of its time.
Once the process is complete, the modernized U.S. nuclear triad will rely on more digital components and will include limited automation. Machine learning applications will provide some essential functions relevant to nuclear decision-making, and analog systems at or beyond their expected end of life will largely be replaced.
In the recent past, the Departments of Defense and Energy have struggled to respond to cybersecurity and supply chain threats to major weapons development programs. In many cases, efforts to address cybersecurity have lagged behind the acquisitions process, creating challenges for protecting against vulnerabilities in new or modified weapons systems. In addition, outside pressures often place a premium on meeting ambitious cost and schedule commitments, sometimes at the expense of performance and reliability, even in the face of evolving cybersecurity risks and challenges presented by new tools such as machine learning. Risks to all digital and machine learning systems are myriad: attacker intrusions, lack of access to critical systems amid a crisis, interference with physical security systems that protect nuclear weapons, and inaccurate data and information, among others. All these risks, if not addressed, could undermine confidence in a nuclear weapon or related system.
Integrating new technologies with old is a perpetual engineering challenge, but for the U.S. nuclear deterrent, it is one with implications that go far beyond the significant risks posed by cyber threats and digital malfunctions. Effective nuclear deterrence requires confidence that nuclear forces will always be ready if needed but never be used without proper authorization.
If the new digital systems integrated into U.S. nuclear weapons are not protected from escalating cyber threats, or if added automation cannot be trusted, the high confidence U.S. leaders (as well as adversaries) place in nuclear weapons systems will erode, undermining nuclear deterrence and, potentially, strategic stability.
Given the multiple risks associated with today’s nuclear modernization program, NTI drew on open-source information, including budget requests, official statements, and press reports, to determine how digital systems and automation are included in the nuclear weapons enterprise modernization and to develop recommendations for military and civilian leaders in the Departments of Defense and Energy, as well as those in oversight roles in the executive branch and Congress. It is crucial—now, before it becomes an even more difficult task to secure the modern systems, and before they are deployed or operational—that the technical risks posed by new technologies be recognized and mitigated. To ensure that as long as the United States has nuclear weapons, they continue to be safe, secure, and effective, it is important that as U.S. nuclear policies evolve, they take into account the benefits and risks of digital and advanced tools to the modernized nuclear deterrent.
Recommendations
The report provides three recommendations; read the full report for the details.
Link: https://www.nti.org/analysis/reports/nti-modernization-report-2020
This writer actually tells us that we will be better off with all these improvements in nuclear weapons. What a horrible thought! Just get rid of them, stupid.
Why were they issued leaky respirators? For Covid treatment or because they have to stop breathing regular air when they are in a particularly dangerous area? This article mentions aid to dying workers. Are people still dying from jobs they performed decades ago or what?
Enough is enough. Here’s what we should do to defend against the next Russian cyberattacks
By Alex Stamos, Washington Post, Dec. 15, 2020
Alex Stamos is the director of the Stanford Internet Observatory and the former chief information security officer of Yahoo and Facebook.
The details are still trickling in, but it seems possible that the latest Russian cyberattacks against the Departments of Homeland Security, Treasury and State; the National Institutes of Health; and possibly dozens of companies and departments will turn out to be one of the most important hacking campaigns in history.
The current reporting suggests that the Russian Foreign Intelligence Service (SVR), long considered Russia’s most advanced intelligence agency in cyber operations, managed to compromise the servers of an important vendor of information technology software and implant a back door. This company, SolarWinds, services tens of thousands of corporate and government clients, and its products naturally have access to critical systems. Since March, we’ve now learned, the SVR has been able to use this toehold to jump into the networks of a variety of highly sensitive organizations. I expect the true impact of the overall campaign won’t be known for months or years as thousands of companies scramble to determine whether they were breached and what was stolen.
While we don’t have all the details, it is already clear that something is wrong with how our country protects itself against the hackers working for our adversaries in Russia, China, Iran and North Korea. As the Biden administration puts together its plan to secure the United States against these kinds of attacks, and Congress considers how to update the existing bipartisan cybersecurity consensus, I offer three initial fixes.
First, we need to build a cyberspace equivalent of the National Transportation Safety Board. Such an agency would track attacks, conduct investigations into the root causes of vulnerabilities and issue recommendations on how to prevent them in the future. As things stand now, our only public account of this latest attack will come from the class-action lawsuits filed by lawyers acting on behalf of affected companies and shareholders. When I worked for Yahoo, I saw for myself what happened after the company was attacked by the Russians. Legal teams produced dozens of depositions and reviewed hundreds of thousands of documents; then they collected their million-dollar payouts, and that was that. No public documentation resulted; no useful lessons were learned.
We should create a mechanism to handle cyberattacks the same way we react to serious failures in other complex industries; the NTSB offers a useful model. While voluntary transparency from technology companies such as FireEye has been helpful, it won’t provide the kinds of detailed reporting across dozens of victims that will enable other security teams to learn from this attack and thereby make the SVR’s job a bit harder.
And while we’re at it, let’s make sure Congress passes a federal data breach law that covers the thousands of secret breaches that occur every year but aren’t publicly discussed. Such attacks have included attempts to acquire critical vaccine data, rocket designs or trade secrets. But there’s no law requiring that they be disclosed unless they include the credit card numbers, email addresses and other bits of personal information covered by state breach laws. Our society can’t respond to the overall risk as long as we’re discussing only a fraction of the significant security failures.
Second, Congress and the new administration can work together to put defensive cybersecurity on the same level as offensive cyber operations and intelligence gathering. The Cybersecurity and Infrastructure Security Agency (CISA) was created only two years ago to coordinate defending both the public and private sectors. While CISA quickly established itself under director Chris Krebs, who was fired by President Trump for his truthful statements regarding election security, the size and technical competence of the agency do not yet match those of its offensive cousins.
CISA has about 2,200 employees spread across its cyber and infrastructure responsibilities. By comparison, the National Security Agency, only one of 17 members of the U.S. intelligence community, has more than 40,000. Patching routers at the Interior Department isn’t as sexy as destroying Iranian centrifuges or reading the correspondence of the Chinese Communist Party, but it is certainly just as important when you consider that the United States has the largest, most technologically advanced, and therefore most vulnerable, economy in the world.
Third, the Biden administration can appoint individuals with practical, hands-on defensive experience to key roles in the White House and critical agencies. Official Washington has long disrespected cybersecurity expertise in a way that would be unthinkable in other complex professions. The Senate would never confirm a malpractice attorney to be a surgeon general, and the president would never ask a Judge Advocate General Corps officer to serve as chairman of the Joint Chiefs of Staff.
But this, in effect, is just how Washington has treated cybersecurity — as something best understood by the lawyers who prosecute cybercrime and defend breached companies. This isn’t to dismiss the contributions made by members of the legal profession; there are many smart, dedicated lawyers working in the cybersecurity field. Even so, the Biden cybersecurity team should include the voices of people who have real experience preventing, detecting and responding to crises like the one our country is facing today. It’s long overdue that we started treating cyberthreats with the seriousness they deserve.
Six Russian GRU Officers Charged in Connection with Worldwide Deployment of Destructive Malware and Other Disruptive Actions in Cyberspace
19 October 2020 | Department of Justice, United States of America | https://www.justice.gov/opa/pr/six-russian-gru-officers-charged-connection-worldwide-deployment-destructive-malware-and
“On Oct. 15, 2020, a federal grand jury in Pittsburgh returned an indictment charging six computer hackers, all of whom were residents and nationals of the Russian Federation (Russia) and officers in Unit 74455 of the Russian Main Intelligence Directorate (GRU), a military intelligence agency of the General Staff of the Armed Forces.
These GRU hackers and their co-conspirators engaged in computer intrusions and attacks intended to support Russian government efforts to undermine, retaliate against, or otherwise destabilize: (1) Ukraine; (2) Georgia; (3) elections in France; (4) efforts to hold Russia accountable for its use of a weapons-grade nerve agent, Novichok, on foreign soil; and (5) the 2018 PyeongChang Winter Olympic Games after Russian athletes were banned from participating under their nation’s flag as a consequence of a Russian government-sponsored doping effort.
Their computer attacks used some of the world’s most destructive malware to date, including: KillDisk and Industroyer, which each caused blackouts in Ukraine; NotPetya, which caused nearly $1 billion in losses to the three victims identified in the indictment alone; and Olympic Destroyer, which disrupted thousands of computers used to support the 2018 PyeongChang Winter Olympics. The indictment charges the defendants with conspiracy, computer hacking, wire fraud, aggravated identity theft, and false registration of a domain name.
According to the indictment, beginning in or around November 2015 and continuing until at least in or around October 2019, the defendants and their co-conspirators deployed destructive malware and took other disruptive actions, for the strategic benefit of Russia, through unauthorized access to victim computers (hacking). As alleged, the conspiracy was responsible for the following destructive, disruptive, or otherwise destabilizing computer intrusions and attacks:
Cybersecurity researchers have tracked the Conspirators and their malicious activity using the labels “Sandworm Team,” “Telebots,” “Voodoo Bear,” and “Iron Viking.”
The charges were announced by Assistant Attorney General John C. Demers; FBI Deputy Director David Bowdich; U.S. Attorney for the Western District of Pennsylvania Scott W. Brady; and Special Agents in Charge of the FBI’s Atlanta, Oklahoma City, and Pittsburgh Field Offices, J.C. “Chris” Hacker, Melissa R. Godbold, and Michael A. Christman, respectively.
“No country has weaponized its cyber capabilities as maliciously or irresponsibly as Russia, wantonly causing unprecedented damage to pursue small tactical advantages and to satisfy fits of spite,” said Assistant Attorney General for National Security John C. Demers. “Today the department has charged these Russian officers with conducting the most disruptive and destructive series of computer attacks ever attributed to a single group, including by unleashing the NotPetya malware. No nation will recapture greatness while behaving in this way.”
“The FBI has repeatedly warned that Russia is a highly capable cyber adversary, and the information revealed in this indictment illustrates how pervasive and destructive Russia’s cyber activities truly are,” said FBI Deputy Director David Bowdich. “But this indictment also highlights the FBI’s capabilities. We have the tools to investigate these malicious malware attacks, identify the perpetrators, and then impose risks and consequences on them. As demonstrated today, we will relentlessly pursue those who threaten the United States and its citizens.”
“For more than two years we have worked tirelessly to expose these Russian GRU Officers who engaged in a global campaign of hacking, disruption and destabilization, representing the most destructive and costly cyber-attacks in history,” said U.S. Attorney Scott W. Brady for the Western District of Pennsylvania. “The crimes committed by Russian government officials were against real victims who suffered real harm. We have an obligation to hold accountable those who commit crimes – no matter where they reside and no matter for whom they work – in order to seek justice on behalf of these victims.”
“The exceptional talent and dedication of our teams in Pittsburgh, Atlanta and Oklahoma City who spent years tracking these members of the GRU is unmatched,” said FBI Pittsburgh Special Agent in Charge Michael A. Christman. “These criminals underestimated the power of shared intelligence, resources and expertise through law enforcement, private sector and international partnerships.”
The defendants, Yuriy Sergeyevich Andrienko (Юрий Сергеевич Андриенко), 32; Sergey Vladimirovich Detistov (Сергей Владимирович Детистов), 35; Pavel Valeryevich Frolov (Павел Валерьевич Фролов), 28; Anatoliy Sergeyevich Kovalev (Анатолий Сергеевич Ковалев), 29; Artem Valeryevich Ochichenko (Артем Валерьевич Очиченко), 27; and Petr Nikolayevich Pliskin (Петр Николаевич Плискин), 32, are all charged in seven counts: conspiracy to conduct computer fraud and abuse, conspiracy to commit wire fraud, two counts of wire fraud, damaging protected computers, and two counts of aggravated identity theft. Each defendant is charged in every count. The charges contained in the indictment are merely accusations, however, and the defendants are presumed innocent unless and until proven guilty beyond a reasonable doubt.
The indictment accuses each defendant of committing the following overt acts in furtherance of the charged crimes:
[Table omitted: the press release’s summary of overt acts attributed to each defendant — Yuriy Sergeyevich Andrienko, Sergey Vladimirovich Detistov, Pavel Valeryevich Frolov, Anatoliy Sergeyevich Kovalev, Artem Valeryevich Ochichenko, and Petr Nikolayevich Pliskin; see the original release for details.]
The defendants and their co-conspirators caused damage and disruption to computer networks worldwide, including in France, Georgia, the Netherlands, Republic of Korea, Ukraine, the United Kingdom, and the United States.
The NotPetya malware, for example, spread worldwide, damaged computers used in critical infrastructure, and caused enormous financial losses. Those losses were only part of the harm, however. For example, the NotPetya malware impaired Heritage Valley’s provision of critical medical services to citizens of the Western District of Pennsylvania through its two hospitals, 60 offices, and 18 community satellite facilities. The attack caused the unavailability of patient lists, patient history, physical examination files, and laboratory records. Heritage Valley lost access to its mission-critical computer systems (such as those relating to cardiology, nuclear medicine, radiology, and surgery) for approximately one week and administrative computer systems for almost one month, thereby causing a threat to public health and safety.
The conspiracy to commit computer fraud and abuse carries a maximum sentence of five years in prison; conspiracy to commit wire fraud carries a maximum sentence of 20 years in prison; the two counts of wire fraud carry a maximum sentence of 20 years in prison; intentional damage to a protected computer carries a maximum sentence of 10 years in prison; and the two counts of aggravated identity theft carry a mandatory sentence of two years in prison. The indictment also alleges false registration of domain names, which would increase the maximum sentence of imprisonment for wire fraud to 27 years in prison; the maximum sentence of imprisonment for intentional damage to a protected computer to 17 years in prison; and the mandatory sentence of imprisonment for aggravated identity theft to four years in prison. These maximum potential sentences are prescribed by Congress, however, and are provided here for informational purposes only, as the assigned judge will determine any sentence of a defendant.
Defendant Kovalev was previously charged in federal indictment number CR 18-215, in the District of Columbia, with conspiring to gain unauthorized access into the computers of U.S. persons and entities involved in the administration of the 2016 U.S. elections.
Trial Attorney Heather Alpino and Deputy Chief Sean Newell of the National Security Division’s Counterintelligence and Export Control Section and Assistant U.S. Attorneys Charles Eberle and Jessica Smolar of the U.S. Attorney’s Office for the Western District of Pennsylvania are prosecuting this case. The FBI’s Atlanta, Oklahoma City, and Pittsburgh field offices conducted the investigation, with the assistance of the FBI’s Cyber Division.
The Criminal Division’s Office of International Affairs provided critical assistance in this case. The department also appreciates the significant cooperation and assistance provided by Ukrainian authorities, the Governments of the Republic of Korea and New Zealand, Georgian authorities, and the United Kingdom’s intelligence services, as well as many of the FBI’s Legal Attachés and other foreign authorities around the world. Numerous victims cooperated and provided valuable assistance in the investigation.
The department is also grateful to Google, including its Threat Analysis Group (TAG); Cisco, including its Talos Intelligence Group; Facebook; and Twitter, for the assistance they provided in this investigation. Some private sector companies independently disabled numerous accounts for violations of the companies’ terms of service.”
Privacy+Group Dynamics=Irrationality=WhatsApp
If you aren’t on WhatsApp you’re behind the times. But maybe that’s good. Reading this article will remind you of the downside of participating in secretive groups. Unfortunately, the author does not propose any solutions. I think the current accusation that social media violates privacy has it backwards. Privacy is worse than transparency 90 percent of the time, in my opinion. It is unethical to sneak around speculating about others. Ask the person you suspect, flat out, what they are up to — and publicize their answers. That’s my rule. Anyhow, please find time to read this dispiriting essay about how closed groups actually behave. Thank you, Guardian.
What’s Wrong With WhatsApp
As social media has become more inhospitable, the appeal of private online groups has grown. But they hold their own dangers—to those both inside and out.
By William Davies
In the spring, as the virus swept across the world and billions of people were compelled to stay at home, the popularity of one social media app rose more sharply than any other. By late March, usage of WhatsApp around the world had grown by 40%. In Spain, where the lockdown was particularly strict, it rose by 76%. In those early months, WhatsApp – which hovers neatly between the space of email, Facebook and SMS, allowing text messages, links and photos to be shared between groups – was a prime conduit through which waves of news, memes and mass anxiety travelled.
At first, many of the new uses were heartening. Mutual aid groups sprang up to help the vulnerable. Families and friends used the app to stay close, sharing their fears and concerns in real time. Yet by mid-April, the role that WhatsApp was playing in the pandemic looked somewhat darker. A conspiracy theory about the rollout of 5G, which originated long before Covid-19 had appeared, now claimed that mobile phone masts were responsible for the disease. Across the UK, people began setting fire to 5G masts, with 20 arson attacks over the Easter weekend alone.
WhatsApp, along with Facebook and YouTube, was a key channel through which the conspiracy theory proliferated. Some feared that the very same community groups created during March were now accelerating the spread of the 5G conspiracy theory. Meanwhile, the app was also enabling the spread of fake audio clips, such as a widely shared recording in which someone who claimed to work for the NHS reported that ambulances would no longer be sent to assist people with breathing difficulties.
This was not the first time that WhatsApp has been embroiled in controversy. While the “fake news” scandals surrounding the 2016 electoral upsets in the UK and US were more focused upon Facebook – which owns WhatsApp – subsequent electoral victories for Jair Bolsonaro in Brazil and Narendra Modi in India were aided by incendiary WhatsApp messaging, exploiting the vast reach of the app in these countries. In India, there have also been reports of riots and deaths linked to rumours circulating on WhatsApp. India’s Ministry of Information and Broadcasting has sought ways of regulating WhatsApp content, though this has led to new controversies about government infringement on civil liberties.
As ever, there is a risk of pinning too much blame for complex political crises on an inert technology. WhatsApp has also taken some steps to limit its use as a vehicle for misinformation. In March, a WhatsApp spokesperson told the Washington Post that the company had “engaged health ministries around the world to provide simple ways for citizens to receive accurate information about the virus”. But even away from such visible disruptions, WhatsApp does seem to be an unusually effective vehicle for sowing distrust in public institutions and processes.
A WhatsApp group can exist without anyone outside the group knowing of its existence, who its members are or what is being shared, while end-to-end encryption makes it immune to surveillance from third parties. Back in Britain’s pre-Covid-19 days, when Brexit and Jeremy Corbyn were the issues that provoked the most feverish political discussions, speculation and paranoia swirled around such groups. Media commentators who defended Corbyn were often accused of belonging to a WhatsApp group of “outriders”, co-ordinated by Corbyn’s office, which supposedly told them what line to take. Meanwhile, the Conservative party’s pro-Brexit European Research Group was said to be chiefly sustained in the form of a WhatsApp group, whose membership was never public. Secretive coordination – both real and imagined – does not strengthen confidence in democracy.
WhatsApp groups can not only breed suspicion among the public, but also manufacture a mood of suspicion among their own participants. As also demonstrated by closed Facebook groups, discontents – not always well-founded – accumulate in private before boiling over in public. The capacity to circulate misinformation and allegations is becoming greater than the capacity to resolve them.
The political threat of WhatsApp is the flipside of its psychological appeal. Unlike so many other social media platforms, WhatsApp is built to secure privacy. On the plus side, this means intimacy with those we care about and an ability to speak freely; on the negative side, it injects an ethos of secrecy and suspicion into the public sphere. As Facebook, Twitter and Instagram become increasingly theatrical – every gesture geared to impress an audience or deflect criticism – WhatsApp has become a sanctuary from a confusing and untrustworthy world, where users can speak more frankly. As trust in groups grows, so it is withdrawn from public institutions and officials. A new common sense develops, founded on instinctive suspicion towards the world beyond the group.
The ongoing rise of WhatsApp, and its challenge to both legacy institutions and open social media, poses a profound political question: how do public institutions and discussions retain legitimacy and trust once people are organised into closed and invisible communities? The risk is that a vicious circle ensues, in which private groups circulate ever more information and disinformation to discredit public officials and public information, and our alienation from democracy escalates.
When WhatsApp was bought by Facebook in 2014 for $19bn, it was the most valuable tech acquisition in history. At the time, WhatsApp brought 450 million users with it. In February this year, it hit 2 billion users worldwide – and that is even before its lockdown surge – making it by far the most widely used messenger app, and the second most commonly used app after Facebook itself. In many countries, it is now the default means of digital communication and social coordination, especially among younger people.
The features that would later allow WhatsApp to become a conduit for conspiracy theory and political conflict were never integral to SMS, and have more in common with email: the creation of groups and the ability to forward messages. The ability to forward messages from one group to another – a feature WhatsApp has since restricted in response to Covid-19-related misinformation – makes for a potent informational weapon. Groups were initially limited in size to 100 people, but this was later increased to 256. That’s small enough to feel exclusive, but if 256 people forward a message on to another 256 people, 65,536 will have received it.
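The fan-out arithmetic above can be sketched in a few lines of Python. This is an illustrative toy model only, not a description of WhatsApp’s actual mechanics: it assumes every member of each receiving group forwards the message to one fresh, non-overlapping group of the same size, and the function name is invented for the example.

```python
GROUP_SIZE = 256  # WhatsApp's group-size cap cited in the essay


def reach_after_hops(hops: int, group_size: int = GROUP_SIZE) -> int:
    """Total recipients after `hops` rounds of group-to-group forwarding,
    assuming each round fans out to entirely new groups (no overlap)."""
    return group_size ** hops


print(reach_after_hops(1))  # 256: the original group
print(reach_after_hops(2))  # 65536: each of the 256 forwards to a new group
```

Real diffusion is messier – groups overlap and most members never forward – but the sketch shows why even two rounds of forwarding can carry a message far beyond its “exclusive” origin.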
Groups originate for all sorts of purposes – a party, organising amateur sport, a shared interest – but then take on a life of their own. There can be an anarchic playfulness about this, as a group takes on its own set of in-jokes and traditions. In a New York Magazine piece last year, under the headline “Group chats are making the internet fun again”, the technology critic Max Read argued that groups have become “an outright replacement for the defining mode of social organization of the past decade: the platform-centric, feed-based social network.”
It’s understandable that in order to relax, users need to know they’re not being overheard – though there is a less playful side to this. If groups are perceived as a place to say what you really think, away from the constraints of public judgement or “political correctness”, then it follows that they are also where people turn to share prejudices or more hateful expressions that are unacceptable (or even illegal) elsewhere. Santiago Abascal, the leader of the Spanish far-right party Vox, has defined his party as one willing to “defend what Spaniards say on WhatsApp”.
A different type of group emerges where its members are all users of the same service, such as a school, a housing block or a training programme. A potential problem here is one of negative solidarity, in which feelings of community are deepened by turning against the service in question. Groups of this sort typically start from a desire to pool information – students staying in touch about deadlines, say – but can swiftly become a means of discrediting the institution they cluster around. Initial murmurs of dissatisfaction can escalate rapidly, until the group has forged an identity around a spirit of resentment and alienation, which can then be impossible to dislodge with countervailing evidence.
Faced with the rise of new technologies, one option for formal organisations and associations is to follow people to their preferred platform. In March, the government introduced a WhatsApp-based information service about Covid-19, with an automated chatbot. But groups themselves can be an unreliable means of getting crucial information to people. Anecdotal evidence from local political organisers and trade union reps suggests that, despite the initial efficiency of WhatsApp groups, their workload often increases because of the escalating number of sub-communities, each of which needs to be contacted separately. Schools desperately seek to get information out to parents, only to discover that unless it appears in precisely the right WhatsApp group, it doesn’t register. The age of the message board, be it physical or digital, where information can be posted once for anyone who needs it, is over.
WhatsApp’s ‘broadcast list’ function, which allows messages to be sent to multiple recipients who are invisible to one another (like email’s ‘bcc’ line), alleviates some of the problems of groups taking on a life of their own. But even then, lists can only include people who are already mutual contacts of the list-owner. The problem, from the point of view of institutions, is that WhatsApp use seems fuelled by a preference for informal, private communication as such. University lecturers are frequently baffled by the discovery that many students and applicants don’t read email. If email is going into decline, WhatsApp does not seem to be a viable alternative when it comes to sharing verified information as widely and inclusively as possible.
Groups are great for brief bursts of humour or frustration, but, by their very nature, far less useful for supporting the circulation of public information. To understand why this is the case, we have to think about the way in which individuals can become swayed and influenced once they belong to a group.
The internet has brought with it its own litany of social pathologies and threats. Trolling, flaming, doxing, cancelling and pile-ons are all risks that go with socialising within a vast open architecture. “Open” platforms such as Twitter are reminders that much social activity tends to be aimed at a small and select community, but can be rendered comical or shameful when exposed to a different community altogether.
As any frequent user of WhatsApp or a closed Facebook group will recognise, the moral anxiety associated with groups is rather different. If the worry in an open network is of being judged by some outside observer, be it one’s boss or an extended family member, in a closed group it is of saying something that goes against the codes that anchor the group’s identity. Groups can rapidly become dominated by a certain tone or worldview that is uncomfortable to challenge and nigh-impossible to dislodge. WhatsApp is a machine for generating feelings of faux pas, as comments linger in a group’s feed, waiting for a response.
This means that while groups can generate high levels of solidarity, which can in principle be put to powerful political effect, it also becomes harder to express disagreement within the group. If, for example, an outspoken and popular member of a neighbourhood WhatsApp group begins to circulate misinformation about health risks, the general urge to maintain solidarity means that their messages are likely to be met with approval and thanks. When a claim or piece of content shows up in a group, there may be many members who view it as dubious; the question is whether they have the confidence to say as much. Meanwhile, the less sceptical can simply forward it on. It’s not hard, then, to understand why WhatsApp is a powerful distributor of “fake news” and conspiracy theories.
As on open social platforms, one of the chief ways of building solidarity on WhatsApp is to posit some injustice or enemy that threatens the group and its members. In the most acute examples, conspiracy theories are unleashed against political opponents, to the effect that they are paedophiles or secret affiliates of foreign powers. Such plausibly deniable practices swirled around the fringes of the successful election campaigns of Modi, Bolsonaro and Donald Trump, and across multiple platforms.
But what makes WhatsApp potentially more dangerous than public social media are the higher levels of trust and honesty that are often present in private groups. It is a truism that nobody is as happy as they appear on Facebook, as attractive as they appear on Instagram or as angry as they appear on Twitter, which spawns a growing weariness with such endless performance. By contrast, closed groups are where people take off their public masks and let their critical guard down. Neither anonymity (a precondition of most trolling) nor celebrity is on offer. The speed with which rumours circulate on WhatsApp is partly a reflection of how altruistic and uncritical people can be in groups. Most of the time, people seem to share false theories about Covid-19 not with the intention of doing harm, but precisely out of concern for other group members. Anti-vaxx, anti-5G or anti-Hillary rumours combine an identification of an enemy with a strong internal sense of solidarity. Nevertheless, they add to the sense that the world is hostile and dangerous.
There is one particular pattern of a group chat that can manufacture threats and injustices out of thin air. It tends to start with one participant speculating that they are being let down or targeted by some institution or rival group – be it a public service, business or cultural community – whereupon a second participant agrees. By this stage, it becomes risky for anyone else to defend the institution or group in question, and immediately a new enemy and a new resentment is born. Instantly, the warnings and denunciations emanating from within the group take on a level of authenticity that cannot be matched by the entity that is now the object of derision.
But what if the first contributor has misunderstood or misread something, or had a very stressful day and needs to let off steam? And what if the second is merely agreeing so as to make the first one feel better? And what if the other members are either too distracted, too inhibited or too exhausted to say anything to oppose this fresh indignation? This needn’t snowball into the forms of conspiracy theory that produce riots or arson attacks. But even in milder forms, it makes the job of communicating official information – occasionally life-saving information – far more troublesome than it was just a decade ago. Information about public services and health risks is increasingly having to penetrate a thicket of overlapping groups, many of which may have developed an instinctive scepticism to anything emanating from the “mainstream”.
Part of the challenge for institutions is that there is often a strange emotional comfort in the shared feeling of alienation and passivity. “We were never informed about that”, “nobody consulted us”, “we are being ignored”. These are dominant expressions of our political zeitgeist. As WhatsApp has become an increasingly common way of encountering information and news, a vicious circle can ensue: the public world seems ever more distant, impersonal and fake, and the private group becomes a space of sympathy and authenticity.
This is a new twist in the evolution of the social internet. Since the 90s, the internet has held out a promise of connectivity, openness and inclusion, only to then confront inevitable threats to privacy, security and identity. By contrast, groups make people feel secure and anchored, but also help to fragment civil society into separate cliques, unknown to one another. This is the outcome of more than 20 years of ideological battles over what sort of social space the internet should be.
For a few years at the dawn of the millennium, the O’Reilly Emerging Technology Conferences (or ETech) were a crucible in which a new digital world was imagined and debated. Launched by the west coast media entrepreneur Tim O’Reilly and hosted annually around California, the conferences attracted a mixture of geeks, gurus, designers and entrepreneurs, brought together more in a spirit of curiosity than of commerce. In 2005, O’Reilly coined the term “web 2.0” to describe a new wave of websites that connected users with each other, rather than with existing offline institutions. Later that year, the domain name facebook.com was purchased by a 21-year-old Harvard student, and the age of the giant social media platforms was born.
Within this short window of time, we can see competing ideas of what a desirable online community might look like. The more idealistic tech gurus who attended ETech insisted that the internet should remain an open public space, albeit one in which select communities could cluster for their own particular purposes, such as creating open-source software projects or Wikipedia entries. The untapped potential of the internet, they believed, was for greater democracy. But for companies such as Facebook, the internet presented an opportunity to collect data about users en masse. The internet’s potential was for greater surveillance. The rise of the giant platforms from 2005 onwards suggested the latter view had won out. And yet, in a strange twist, we are now witnessing a revival of anarchic, self-organising digital groups – only now, in the hands of Facebook as well. The two competing visions have collided.
To see how this story unfolded, it’s worth going back to 2003. At the ETech conference that year, a keynote speech was given by the web enthusiast and writer Clay Shirky, now an academic at New York University, which surprised its audience by declaring that the task of designing successful online communities had little to do with technology at all. The talk looked back at one of the most fertile periods in the history of social psychology, and was entitled “A group is its own worst enemy”.
Shirky drew on the work of the British psychoanalyst and psychologist Wilfred Bion, who, together with Kurt Lewin, was one of the pioneers of the study of “group dynamics” in the 40s. The central proposition of this school was that groups possess psychological properties that exist independently of their individual members. In groups, people find themselves behaving in ways that they never would if left to their own devices.
Like Stanley Milgram’s notorious series of experiments to test obedience in the early 60s – in which some participants were persuaded to administer apparently painful electric shocks to others – the mid-20th century concern with group dynamics grew in the shadow of the political horrors of the 30s and 40s, which had posed grave questions about how individuals come to abandon their ordinary sense of morality. Lewin and Bion posited that groups possess distinctive personalities, which emerge organically through the interaction of their members, independently of what rules they might have been given, or what individuals might rationally do alone.
With the dawn of the 60s, and its more individualistic political hopes, psychologists’ interest in groups started to wane. The assumption that individuals are governed by conformity fell by the wayside. When Shirky introduced Bion’s work at the O’Reilly conference in 2003, he was going out on a limb. What he correctly saw was that, in the absence of any explicit structures or rules, online communities were battling against many of the disruptive dynamics that fascinated the psychologists of the 40s.
Shirky highlighted one area of Bion’s work in particular: how groups can spontaneously sabotage their own stipulated purpose. The beauty of early online communities, such as listservs, message boards and wikis, was their spirit of egalitarianism, humour and informality. But these same properties often worked against them when it came to actually getting anything constructive done, and could sometimes snowball into something obstructive or angry. Once the mood of a group was diverted towards jokes, disruption or hostility towards another group, it became very difficult to wrest it back.
Bion’s concerns originated in fear of humanity’s darker impulses, but the vision Shirky was putting to his audience that day was a more optimistic one. If the designers of online spaces could preempt disruptive “group dynamics”, he argued, then it might be possible to support cohesive, productive online communities that remained open and useful at the same time. Like a well-designed park or street, a well-designed online space might nurture healthy sociability without the need for policing, surveillance or closure to outsiders. Between one extreme of anarchic chaos (constant trolling), and another of strict moderation and regulation of discussion (acceding to an authority figure), thinking in terms of group dynamics held out the promise of a social web that was still largely self-organising, but also relatively orderly.
But there was another solution to this same problem waiting in the wings, which would turn out to be world-changing in its consequences: forget group dynamics, and focus on reputation dynamics instead. If someone online has a certain set of offline attributes, such as a job title, an album of tagged photos, a list of friends and an email address, they will behave themselves in ways that are appropriate to all of these fixed public identifiers. Add more and more surveillance into the mix, both by one’s peers and by corporations, and the problem of spontaneous group dynamics disappears. It is easier to hold on to your self-control and your conscience if you are publicly visible, including to friends, extended family and colleagues.
For many of the Californian pioneers of cyberculture, who cherished online communities as an escape from the values and constraints of capitalist society, Zuckerberg’s triumph represents an unmitigated defeat. Corporations were never meant to seize control of this space. As late as 2005, the hope was that the social web would be built around democratic principles and bottom-up communities. Facebook abandoned all of that, by simply turning the internet into a multimedia telephone directory.
The last ETech was held in 2009. Within a decade, Facebook was being accused of pushing liberal democracy to the brink and even destroying truth itself. But as the demands of social media have become more onerous, with each of us curating a profile and projecting an identity, the lure of the autonomous group has resurfaced once again. In some respects, Shirky’s optimistic concern has now become today’s pessimistic one. Partly thanks to WhatsApp, the unmoderated, self-governing, amoral collective – larger than a conversation, smaller than a public – has become a dominant and disruptive political force in our society, much as figures such as Bion and Lewin feared.
Conspiracy theories and paranoid group dynamics were features of political life long before WhatsApp arrived. It makes no sense to blame the app for their existence, any more than it makes sense to blame Facebook for Brexit. But by considering the types of behaviour and social structures that technologies enable and enhance, we get a better sense of some of society’s characteristics and ailments. What are the general tendencies that WhatsApp helps to accelerate?
First of all, there is the problem of conspiracies in general. WhatsApp is certainly an unbeatable conduit for circulating conspiracy theories, but we must also admit that it seems to be an excellent tool for facilitating genuinely conspiratorial behaviour. One of the great difficulties when considering conspiracy theories in today’s world is that, regardless of WhatsApp, some conspiracies turn out to be true: consider Libor-fixing, phone-hacking, or efforts by Labour party officials to thwart Jeremy Corbyn’s electoral prospects. These all happened, but one would have sounded like a conspiracy theorist to suggest them until they were later confirmed by evidence.
A communication medium that connects groups of up to 256 people, without any public visibility, operating via the phones in their pockets, is, by its very nature, well-suited to supporting secrecy. Obviously not every group chat counts as a “conspiracy”. But it makes the question of how society coheres, who is associated with whom, into a matter of speculation – something that involves a trace of conspiracy theory. In that sense, WhatsApp is not just a channel for the circulation of conspiracy theories, but offers content for them as well. The medium is the message.
The full political potential of WhatsApp has not been witnessed in the UK. To date, it has not served as an effective political campaigning tool, partly because users seem reluctant to join large groups with people they don’t know. However, the influence – imagined or real – of WhatsApp groups within Westminster and the media undoubtedly contributes to the deepening sense that public life is a sham, behind which lurk invisible networks through which power is coordinated. WhatsApp has become a kind of “backstage” of public life, where it is assumed people articulate what they really think and believe in secret. This is a sensibility that has long fuelled conspiracy theories, especially antisemitic ones. Invisible WhatsApp groups now offer a modern update to the type of “explanation” that once revolved around Masonic lodges or the Rothschilds.
Away from the world of party politics and news media, there is the prospect of a society organised as a tapestry of overlapping cliques, each with their own internal norms. Groups are less likely to encourage heterodoxy or risk-taking, and more likely to inculcate conformity, albeit often to a set of norms hostile to those of the “mainstream”, whether that be the media, politics or professional public servants simply doing their jobs. In the safety of the group, it becomes possible to have one’s cake and eat it, to be simultaneously radical and orthodox, hyper-sceptical and yet unreflective.
For all the benefits that WhatsApp offers in helping people feel close to others, its rapid ascendency is one further sign of how a common public world – based upon verified facts and recognised procedures – is disintegrating. WhatsApp is well equipped to support communications on the margins of institutions and public discussion: backbenchers plotting coups, parents gossiping about teachers, friends sharing edgy memes, journalists circulating rumours, family members forwarding on unofficial medical advice. A society that only speaks honestly on the margins like this will find it harder to sustain the legitimacy of experts, officials and representatives who, by definition, operate in the spotlight. Meanwhile, distrust, alienation and conspiracy theories become the norm, chipping away at the institutions that might hold us together.
William Davies is a sociologist and political economist. His latest book is “Nervous States: How Feeling Took Over the World.”
July 2, 2020. https://getpocket.com/explore/item/what-s-wrong-with-whatsapp?utm_source=pocket-newtab
“Democrats Push for More Transparency about Russian Election Interference”
By Joseph Marks
“Top Democrats are slamming the Trump administration for not sharing enough information with the public about Russian efforts to interfere in November’s election.”
“While intelligence officials have warned that U.S. adversaries are trying to hack into political campaigns and election systems – and cited Russia, China and Iran as the biggest threats — House Speaker Nancy Pelosi (D-Calif.) and Senate Minority Leader Chuck Schumer (D-N.Y.) say that’s not enough to help voters gird themselves against social media disinformation or the sort of hacking and leaking campaign that upended Hillary Clinton’s campaign in 2016.”
The top-line announcement that interference exists doesn’t “go nearly far enough in arming the American people with the knowledge they need about how foreign powers are seeking to influence our political process,” Schumer and Pelosi warned in a statement.
“The Russians are once again trying to influence the election and divide Americans, and these efforts must be deterred, disrupted and exposed,” they continue. The statement was also signed by House Intelligence Chairman Adam B. Schiff (D-Calif.) and Sen. Mark Warner (D-Va.), the top Democrat on the Senate Intelligence Committee.
The push comes as Joe Biden seeks to project strength on election interference and draw a stark contrast with President Trump.
The presumptive Democratic nominee promised to punch back hard against Russia if he becomes president and “make full use of my executive authority to impose substantial and lasting costs on state perpetrators [of election interference].”
…
To read more about this article: https://www.washingtonpost.com/politics/2020/07/28/cybersecurity-202-democrats-push-more-transparency-about-russian-election-interference/
Does this ever come up in political party deliberations? The parties all set out their platforms before the election (well, this year the Republicans aren’t bothering to do so, since the only thing they care about is showing fealty toward Trump). Does anyone know how these platforms are created and whether there is room for anyone to refer to the nuclear risk as a problem?
First, find the bugs
Yes, let’s regulate. But first we have to locate the bugs. And it is “ethical hackers” who are providing the crucial information about the bugs in software that we buy. There are no parliamentarians capable of finding these mistakes — or instances of sloppy coding. Thank you, Black Hat, for holding conventions where your ethical hackers blow such good whistles.
Lawmakers Won’t (Really Can’t) Regulate the Internet of Things
If you think it’s bad now, just wait five years. The Internet of Things is gradually coming in, making us all more vulnerable, since computers are linked to computers, which are linked to other computers that we depend on for everything in our daily lives. And the TRULY scary part is that legislators don’t know enough to enact the kind of laws we need.
Who understands the technology well enough to write sensible legislation? I think we’d need a whole new institution staffed by ex-hackers but paid by the government. And then they would probably set up rules that we would personally challenge, so there needs to be some kind of appeal too. Gets complicated fast!
Beware Chinese Drones: They Might Be Spying on Us!
By: Joseph Marks
“Researchers are warning about cybersecurity vulnerabilities in an Android app that powers a popular Chinese-made drone they say could help the Chinese government scoop up reams of information.
The accusation comes amid a diplomatic clash between Washington and Beijing over everything from trade to the search for a coronavirus vaccine and it’s sure to worsen U.S. distrust of a broad range of consumer technology.”
“That distrust has embroiled everything from the telecom giant Huawei to the video app TikTok. The concerns have deepened during the coronavirus pandemic and are threatening to create a permanent fissure between Western and Chinese technology.
The vulnerability could allow DJI, the world’s largest drone maker, or someone with access to its computer systems, to grab information from the microphones, cameras, contacts and even locations of hundreds of thousands of drone owners worldwide, the cybersecurity firms Grimm and Synacktiv found.”
The company is also able to send automatic updates to the apps without Google or the drone owner consenting or even necessarily knowing the app is being updated, researchers found. Theoretically that update function could be used to load the phones with malware that could send troves of data back to China, they said.
The feature is only present in Android apps used by consumer drone owners, not in the version used by companies and government agencies. It’s also not present in the iPhone version of the app.
…
To read the full article on the Washington Post: https://www.washingtonpost.com/politics/2020/07/24/cybersecurity-202-drone-vulnerabilities-add-us-china-spying-tensions/?utm_campaign=wp_the_cybersecurity_202&utm_medium=email&utm_source=newsletter&wpisrc=nl_cybersecurity202
Title: Security Vulnerabilities in Chinese Drones Ratchet Up Spying Fears
By: Joseph Marks
Okay, this was published in 2017 — over three years ago. Blair would have informed the authorities long before publishing this in the NY Times. So what has happened since then? Don’t we deserve an explanation?
Clueless governments
With such fast-paced technological advancement, how will governments keep up with policies that protect their citizens?
Right. The Russians and Chinese are blaming the virus on the US, and Trump is blaming it on China. (Notice: Not Russia.) But surely the citizens can see through such silly allegations, right? (Can’t they???)
We need more government regulation of software-controlled components…
If they sell it, they should be responsible for making it work
We should hold all conglomerates responsible for their faulty technology! Since they’re selling us these products, they have to ensure our safety as consumers!
Why Our Nuclear Weapons Can Be Hacked
By Bruce G. Blair
New York Times, 14 March 2017
Article Excerpt(s):
“It is tempting for the United States to exploit its superiority in cyberwarfare to hobble the nuclear forces of North Korea or other opponents. As a new form of missile defense, cyberwarfare seems to offer the possibility of preventing nuclear strikes without the firing of a single nuclear warhead.
But as with many things involving nuclear weaponry, escalation of this strategy has a downside: United States forces are also vulnerable to such attacks.
Imagine the panic if we had suddenly learned during the Cold War that a bulwark of America’s nuclear deterrence could not even get off the ground because of an exploitable deficiency in its control network.
We had such an Achilles’ heel not so long ago. Minuteman missiles were vulnerable to a disabling cyberattack, and no one realized it for many years. If not for a curious and persistent President Barack Obama, it might never have been discovered and rectified.
In 2010, 50 nuclear-armed Minuteman missiles sitting in underground silos in Wyoming mysteriously disappeared from their launching crews’ monitors for nearly an hour. The crews could not have fired the missiles on presidential orders or discerned whether an enemy was trying to launch them. Was this a technical malfunction or was it something sinister? Had a hacker discovered an electronic back door to cut the links? For all the crews knew, someone had put all 50 missiles into countdown to launch. The missiles were designed to fire instantly as soon as they received a short stream of computer code, and they are indifferent about the code’s source.
It was a harrowing scene, and apprehension rippled all the way to the White House. Hackers were constantly bombarding our nuclear networks, and it was considered possible that they had breached the firewalls. The Air Force quickly determined that an improperly installed circuit card in an underground computer was responsible for the lockout, and the problem was fixed.
But President Obama was not satisfied and ordered investigators to continue to look for similar vulnerabilities. Sure enough, they turned up deficiencies, according to officials involved in the investigation.
One of these deficiencies involved the Minuteman silos, whose internet connections could have allowed hackers to cause the missiles’ flight guidance systems to shut down, putting them out of commission and requiring days or weeks to repair.
These were not the first cases of cybervulnerability. In the mid-1990s, the Pentagon uncovered an astonishing firewall breach that could have allowed outside hackers to gain control over the key naval radio transmitter in Maine used to send launching orders to ballistic missile submarines patrolling the Atlantic. So alarming was this discovery, which I learned about from interviews with military officials, that the Navy radically redesigned procedures so that submarine crews would never accept a launching order that came out of the blue unless it could be verified through a second source.
Cyberwarfare raises a host of other fears. Could a foreign agent launch another country’s missiles against a third country? We don’t know. Could a launch be set off by false early warning data that had been corrupted by hackers? This is an especially grave concern because the president has only three to six minutes to decide how to respond to an apparent nuclear attack.
This is the stuff of nightmares, and there will always be some doubt about our vulnerability. We lack adequate control over the supply chain for nuclear components — from design to manufacture to maintenance. We get much of our hardware and software off-the-shelf from commercial sources that could be infected by malware. We nevertheless routinely use them in critical networks. This loose security invites an attempt at an attack with catastrophic consequences. The risk would grow exponentially if an insider, wittingly or not, shares passwords, inserts infected thumb drives or otherwise facilitates illicit access to critical computers.
One stopgap remedy is to take United States and Russian strategic nuclear missiles off hair-trigger alert. Given the risks, it is dangerous to keep missiles in this physical state, and to maintain plans for launching them on early indications of an attack. Questions abound about the susceptibility to hacking of tens of thousands of miles of underground cabling and the backup radio antennas used for launching Minuteman missiles. They (and their Russian counterparts) should be taken off alert. Better yet, we should eliminate silo-based missiles and quick-launch procedures on all sides.
But this is just a start. We need to conduct a comprehensive examination of the threat and develop a remediation plan. We need to better understand the unintended consequences of cyberwarfare — such as possibly weakening another nation’s safeguards against unauthorized launching. We need to improve control over our nuclear supply chain. And it is time to reach an agreement with our rivals on the red lines. The reddest line should put nuclear networks off limits to cyberintrusion. Despite its allure, cyberwarfare risks causing nuclear pandemonium.”
Link: https://www.nytimes.com/2017/03/14/opinion/why-our-nuclear-weapons-can-be-hacked.html
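The Navy fix Blair describes — never accepting a launch order that arrives “out of the blue” unless it can be verified through a second source — is, at its core, a two-channel verification rule. A minimal sketch of that idea follows; all names and data structures are hypothetical illustrations, since the real procedures are classified.

```python
# Hypothetical sketch of two-source order verification, the kind of
# procedure Blair describes. Names and structure are illustrative only.

def verify_order(order_id: str, primary: dict, secondary: dict) -> bool:
    """Accept an order only if it arrives independently on both channels
    and the two copies match. Compromising one channel is not enough."""
    return (
        order_id in primary
        and order_id in secondary
        and primary[order_id] == secondary[order_id]
    )

# A forged order injected on one channel alone is rejected:
radio = {"order-7": "LAUNCH"}
satellite = {}                      # second source never received it
assert verify_order("order-7", radio, satellite) is False

satellite["order-7"] = "LAUNCH"     # confirmed through the second source
assert verify_order("order-7", radio, satellite) is True
```

The design point is that an attacker who breaches a single transmitter (as in the Maine firewall breach) gains nothing unless the independent second channel is breached as well.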
The electric grid, for sure. That doesn’t even require an enemy for us to be vulnerable. A solar flare will do it.
Censored Contagion: How Information on the Coronavirus is Managed on Chinese Social Media
By Lotus Ruan, Jeffrey Knockel, and Masashi Crete-Nishihata
The Citizen Lab (University of Toronto), 3 March 2020
Article Excerpt(s): From the Key Findings Section:
1) “YY, a live-streaming platform in China, began to censor keywords related to the coronavirus outbreak on December 31, 2019, a day after doctors (including the late Dr. Li Wenliang) tried to warn the public about the then unknown virus.
2) WeChat broadly censored coronavirus-related content (including critical and neutral information) and expanded the scope of censorship in February 2020. Censored content included criticism of government, rumours and speculative information on the epidemic, references to Dr. Li Wenliang, and neutral references to Chinese government efforts on handling the outbreak that had been reported on state media.
3) Many of the censorship rules are broad and effectively block messages that include names for the virus or sources for information about it. Such rules may restrict vital communication related to disease information and prevention.”
From the Article Itself:
(Regarding one of the methods of censorship):
“YY censors keywords client-side, meaning that all of the rules to perform censorship are found inside of the application. YY has a built-in list of keywords that it uses to perform checks to determine if any of these keywords are present in a chat message before a message is sent. If a message contains a keyword from the list, then the message is not sent. The application downloads an updated keyword list each time it is run, which means the lists can change over time.
WeChat censors content server-side, meaning that all the rules to perform censorship are on a remote server. When a message is sent from one WeChat user to another, it passes through a server managed by Tencent (WeChat’s parent company) that detects if the message includes blacklisted keywords before a message is sent to the recipient. Documenting censorship on a system with a server-side implementation requires devising a sample of keywords to test, running those keywords through the app, and recording the results. In previous work, we developed an automated system for testing content on WeChat to determine if it is censored.”
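The two censorship models described above can be sketched in a few lines. This is a hypothetical illustration, not YY’s or Tencent’s actual code (neither is public): the filtering check is the same substring test in both cases; what differs is where it runs and who can inspect the blacklist.

```python
# Illustrative sketch of client-side vs. server-side keyword censorship.
# All function names and the blacklist entries are hypothetical.

def client_side_filter(message: str, blacklist: set) -> bool:
    """Client-side model (YY): the keyword list ships inside the app and
    the check runs on the sender's device before the message is sent.
    Returns True if the message may be sent."""
    return not any(keyword in message for keyword in blacklist)

def server_side_filter(message: str, blacklist: set) -> bool:
    """Server-side model (WeChat): the same check, but run on an
    intermediary server whose rules never leave it. Outsiders can only
    infer the list by sending test messages and observing delivery."""
    return not any(keyword in message for keyword in blacklist)

def probe(candidates: list, send_and_check) -> list:
    """Black-box testing of a server-side system, in the spirit of the
    researchers' automated tester: send each candidate keyword through
    the app and record which ones fail to arrive."""
    return [kw for kw in candidates if not send_and_check(kw)]

blacklist = {"forbidden-term"}  # illustrative entry
assert client_side_filter("an ordinary chat message", blacklist)
assert not client_side_filter("about that forbidden-term report", blacklist)
```

The asymmetry matters for research: a client-side list can be extracted directly from the app (which is how the YY keywords were documented), while a server-side list can only ever be sampled from the outside.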
[…]
“On December 31, 2019, a day after Dr. Li Wenliang and seven others warned of the COVID-19 outbreak in WeChat groups, YY added 45 keywords to its blacklist, all of which made references to the then unknown virus that displayed symptoms similar to SARS (the deadly Severe Acute Respiratory Syndrome epidemic that started in southern China and spread globally in 2003).
Among the 45 censored keywords related to the COVID-19 outbreak, 40 are in simplified Chinese and five in traditional Chinese. These keywords include factual descriptions of the flu-like pneumonia disease, references to the name of the location considered as the source of the novel virus, local government agencies in Wuhan, and discussions of the similarity between the outbreak in Wuhan and SARS. Many of these keywords such as “沙士变异” (SARS variation) are very broad and effectively block general references to the virus.”
[…]
“Between January 1 and February 15, 2020, we found 516 keyword combinations directly related to COVID-19 that were censored in our scripted WeChat group chat. The scope of keyword censorship on WeChat expanded in February 2020. Between January 1 and 31, 2020, 132 keyword combinations were found censored in WeChat. Three hundred and eighty-four new keywords were identified in a two-week testing window between February 1 and 15.
Keyword combinations include text in simplified and traditional Chinese. We translated each keyword combination into English and, based on interpretations of the underlying context, grouped them into content categories.
Censored COVID-19-related keyword combinations cover a wide range of topics, including discussions of central leaders’ responses to the outbreak, critical and neutral references to government policies on handling the epidemic, responses to the outbreak in Hong Kong, Taiwan, and Macau, speculative and factual information on the disease, references to Dr. Li Wenliang, and collective action.”
Link: https://citizenlab.ca/2020/03/censored-contagion-how-information-on-the-coronavirus-is-managed-on-chinese-social-media/
Six Reasons the Kremlin Spreads Disinformation About the Coronavirus [Analysis]
By Jakob Kalenský
Digital Forensic Research Lab (Atlantic Council), 24 March 2020
Article Excerpt(s):
“A recent internal report published by the European Union’s diplomatic service revealed that pro-Kremlin media have mounted a “significant disinformation campaign” about the COVID-19 pandemic aimed at Europe. Previous statements by Western officials, including acting U.S. Assistant Secretary of State for Europe and Eurasia Philip Reeker, warning of the campaign suggested that its contours were already visible by the end of February 2020.
The Kremlin’s long-term strategic goal in the information sphere is enduring and stable: undermining Western unity while strengthening Kremlin influence. Pro-Kremlin information operations employ six complementary tactics to achieve that goal, and the ongoing disinformation campaign on COVID-19 is no exception.
1. Spread anti-US, anti-Western, and anti-NATO messages to weaken them
Russian media started spreading false accusations that COVID-19 was a biological weapon manufactured by the United States in late January. The claim has appeared in other languages since then. This messaging is in line with decades of Soviet and Russian propaganda that has been fabricating stories about various diseases allegedly being a U.S. creation at least since 1949.
These messages aim to deepen anti-American, or more generally, anti-Western sentiment. Sometimes, the “perpetrator” is the entire NATO alliance, not just the United States, a variation that the DFRLab has traced in languages other than Russian as well. The impact on an average consumer of these messages will be approximately the same: anti-Western, anti-NATO and anti-U.S. feelings often go hand-in-hand in Europe.
2. Sow chaos and panic
In the aftermath of a tragedy or crisis, pro-Kremlin media outlets often try to incite fear, panic, chaos, and hysteria. On several occasions, in the aftermath of a terror attack in Europe or the United States, pro-Kremlin outlets spread accusations that the attack was a false flag operation conducted by various governments or secret services against its citizens, or that it was staged to impose greater control over the public.
These campaigns aim to stoke and exploit emotions, among which fear is one of the strongest. An audience shaken by fear will be more irrational and more prone to further disinformation operations.
3. Undermine the target audience’s trust in credible sources of information, be it traditional media or the government
Another messaging tactic tries to convince the target audience that the truth is different from whatever is being said by government institutions, local authorities or the media, thereby undermining trust in credible information sources. Convincing people to believe bogus sources of information first requires persuading them that real sources of accurate information cannot be trusted.
4. Undermine trust in objective facts by spreading multiple contradictory messages
According to a March 2020 review of COVID-19-related disinformation cases conducted by EUvsDisinfo, one popular pro-Kremlin narrative alleges, “[t]he virus is a powerful biological weapon, employed by the U.S., the Brits, or the opposition in Belarus.” A few days after the EUvsDisinfo report, pro-Kremlin outlets then accused Latvia of producing the virus. Spreading multiple and often contradictory versions of events undermines trust in objective facts.
The Kremlin has deployed this tactic liberally: after the MH17 tragedy, after the attack on a humanitarian convoy in Syria, and after the attempted murder of Sergei Skripal. The aim here is not to persuade people to believe one particular version of events, but to persuade the average consumer that there are so many versions of events that the truth can never be found. This tactic can be rather effective: then-U.S. presidential candidate Donald Trump previously said that “no one really knows who did it” [i.e. shot down MH17] despite available evidence and statements by US authorities.
5. Spread conspiracies to facilitate the acceptance of other conspiracies
People who believe one conspiracy theory are more likely to accept others. If your job is to spread lies, it helps to promote other conspiracies as well. The pro-Kremlin media has a history of spreading conspiracy theories and elevating conspiracy theorists. A global pandemic that naturally leads to rumor-mongering is an ideal opportunity to spread some additional unfounded beliefs.
6. Identify the channels spreading disinformation
In his book on disinformation, Romanian defector Ion Mihai Pacepa described “Operation Ares,” which used U.S. involvement in Vietnam to spread anti-American feelings both within the United States and abroad in an effort to isolate the United States on the international scene.
“All we had to do was to continue planting the seeds of Ares and water them day after day after day,” Pacepa wrote. “Eventually, American leftists would seize upon Ares and would start pursuing it of their own accord. In the end, our original involvement would be forgotten and Ares would take on a life of its own.”
When you spread disinformation, you not only try to influence the audience — you also gain valuable information from the audience. You identify the channels through which disinformation spreads and the intermediaries that help disinformation reach new audiences. You also see who counters your disinformation. Especially in a time of crisis, when rumors spread faster and travel further than normal, a well-organized disinformation campaign can lend valuable insight into how an adversary’s information environment is organized. This insight is extremely valuable for any future disinformation operations. Knowing who will help you spread the desired information, and whom to try to discredit ahead of time, makes new disinformation campaigns easier to mount and sustain.
Link: https://medium.com/dfrlab/commentary-six-reasons-the-kremlin-spreads-disinformation-about-the-coronavirus-8fee41444f60
They’re Negotiating Cybersecurity at the UN
By Paul Meyer
12 February 2020
Statement by ICT4Peace to Second OEWG session February 10-14, 2020, UN HQ
Along with the Media Foundation for West Africa, the Association for Progressive Communications, the Women’s International League for Peace and Freedom, and Access Now, ICT4Peace was invited on 12 February 2020 to address the UN Member States participating in the Open Ended Working Group (OEWG) on Cybersecurity.
The presentation delivered by Daniel Stauffacher of ICT4Peace was as follows:
“Dear Chairman, members of the Secretariat and distinguished delegates,
We are grateful to address again this important Open Ended Working Group. The Chairman through his working paper has provided us with a helpful framework for focusing the work of this second substantive session.
In describing the existing and potential threats posed by irresponsible state behaviour in cyberspace, we consider that the threat to international peace and security should be preeminent. In the context of its First Committee origins, the OEWG’s efforts should focus on steps to maintain the “cyber peace” at a time of rising geo-political tensions and an expansion of offensive cyber capabilities on the part of several states.
When it comes to offensive cyber operations, there is considerable debate as to how existing international law applies to specific state uses of ICTs. While we would welcome the recognition of legally binding prohibitions on some offensive cyber actions, we recognize that in the near term, adherence to politically binding measures, such as those generated by the 2015 GGE, is probably a more feasible goal.
The Chair has posed the question as to whether member states should “…unilaterally declare to refrain from militarization/offensive use of ICTs?”.
We answer in the affirmative and remind delegates of ICT4Peace’s “Call on Governments” to publicly confirm that they will refrain from offensive cyber operations targeting critical infrastructure. This would be a proactive means for states to demonstrate their commitment in policy and practice to this agreed UN GGE norm.
We also believe it is time for sophisticated cyber/AI surveillance systems to be brought more fully into export controls as a means of preventing human rights abuses.
The Chair has flagged “attribution” as one of the issues which might require additional norms for responsible state behaviour. We believe attribution is a critical precondition for achieving accountability for state conduct in cyberspace. Capacity for objective, well-documented attribution for malicious cyber activity should be developed as a matter of urgency.
ICT4Peace has circulated a paper and launched a pilot “peer-review” process, describing one mechanism for collecting this information, drawing upon the expertise residing in the private and civil sectors. We consider that such a solid information base would complement an eventual “Peer Review Mechanism” that could serve as an inter-governmental forum for holding states to account over their cyber actions. The Human Rights Council and its Universal Periodic Review remains a relevant model in this regard.
We support the goal of the Mexican proposal to encourage reporting by states on their implementation of norms, but believe this should be but one input to a “peer review mechanism” that would provide for inter-active discussion of state conduct.
In addition, in our view, the current and future importance of cyberspace for international peace and prosperity warrants a dedicated forum under UN auspices to enable on-going consideration of international cyber security-related issues. Suitable secretariat support would be required for such a forum.
Perhaps the time has come for the UN to establish an “Office of Cyber Affairs” as a manifestation of the importance the UN Secretary General has assigned to these issues through the HLP on Digital Cooperation and the preparations for the upcoming 75th Anniversary of the founding of the United Nations.
In conclusion, ICT4Peace has high expectations for this Working Group’s output, which it believes can only be enriched by the continued input of industry, academia and civil society, as evidenced in the productive intersessional meeting last December. ICT4Peace stands ready to support this group through its capacity building and other work, as it has done over the past 15 years.
Thank you for your attention”
Cyberattacks on Our Wastewater?
I saw a video by Vice News about the vulnerability of water and wastewater (sewage) treatment plants. Apparently many of the systems are being digitized and monitored remotely. As such, they become increasingly vulnerable to cyberattacks. The video focused on some research in Israel around protecting these vital infrastructure locations and demonstrated how easy it is to hack the system. Alarming news to watch. What other infrastructure is vulnerable to cyber security threats?
Governing State Behaviour in Cyberspace
By: Paul Meyer / November 18, 2019
This fall in New York, the United Nations held the first session of a new process to develop norms of responsible state behaviour in cyberspace. Since 1998, the UN has been addressing the challenge of defining the “rules of the road” for state activity in cyberspace, under the rubric of “Developments in the field of Information and Communication Technology (ICT) in the context of International Security.”
Such rules are desperately needed as this unique domain is being subjected to a growing assault by state-conducted cyber operations of ever-greater sophistication and magnitude, while remaining under a mantle of secrecy. A key question is whether the underlying great power rivalries that are generating this increase in offensive cyber capabilities will be amenable to diplomatic efforts to prevent conflict in cyberspace.
Diplomacy has lagged well behind the pace of militarization of cyber space in recent years. The US director of national intelligence has estimated that over 30 states now possess offensive cyber capabilities. State-on-state cyber interference, be it for espionage or more damaging military aims, is on the rise, with civilians becoming only so much collateral damage in the process. The costs of this trend will not be limited to degrading international cyber security. The potential of the digital world for advancing the UN’s Sustainable Development Goals could be undermined if the international community is unable to fashion some normative governance framework for state cyber operations.
At the UN, a series of groups of governmental experts (known as GGEs), each with a restricted membership of about 15-20 UN members, managed to issue consensus reports in 2010, 2013 and 2015, which proposed a set of norms to govern state behaviour in cyberspace. In 2018, however, the UN General Assembly was faced with an unprecedented situation in which the usually noncontroversial resolution authorizing these groups became a battleground between a Russian-led resolution establishing an Open Ended Working Group (OEWG), in which any UN member state can participate, and a US-led resolution continuing the traditional approach of a restricted GGE that meets behind closed doors. Ironically, Russia became the champion of the new, more transparent and inclusive process, while the US was put on the back foot, having to advocate continuation of the limited, opaque GGE process. A befuddled General Assembly ended up adopting both resolutions, despite their almost identical mandates and the practical strain two processes place on UN resources and policy coherence.
It seems likely that the OEWG, with its earlier commencement and reporting deadline (fall 2020), is going to eclipse the GGE (not due to report out until 2021). The open nature of the working group, with its possibility for many more states to become involved, will likely raise the profile of state conduct in cyberspace at a time when this unique environment is becoming ever-more militarized.
Who’s at the table?
The initial session of the OEWG, held in September, demonstrated that “openness” is a relative concept at the UN. Some 18 NGOs that had requested accreditation to attend the proceedings were refused. This exclusion was a result of opposition from unnamed member states, presumably because some of these NGOs, like the University of Toronto’s Citizen Lab, had highlighted cyber-enabled abuses of human rights by certain states.
The four civil society organizations that were permitted to attend (because they have pre-existing consultative status with the UN) tried to appeal to the collective interest in preserving cyberspace as a domain for peaceful purposes, as opposed to “war-fighting,” as the US has officially characterized it. The Women’s International League for Peace and Freedom voiced its concern with “the militarisation of cyber space” and its support of “solutions that move us closer to cyber peace.” ICT4Peace (with which the author is affiliated) called for the operationalization of the norms proposed by the earlier GGEs and in particular the prohibition on targeting critical infrastructure on which the public depends. With reports of rival cyber powers installing malware in each other’s electricity grids, ICT4Peace advocates that the prohibition on cyber operations against critical infrastructure should be respected at all times, and that states should publicly pledge to honour this restraint.
The national statements made and working papers submitted during the week-long inaugural session suggest that existing policy divides among leading cyber powers are persisting in the new context and will make it challenging for the OEWG to fulfil its mandate to further develop “norms, rules and principles of responsible state behaviour in cyberspace.” These policy differences principally revolve around the extent of sovereign control of cyberspace and the degree to which states are willing to accept restraints on their cyber operations abroad.
What about sovereignty?
The issue of sovereign control of cyberspace has been a long-standing point of debate amongst states. States in the West (broadly understood) have tended to advocate the free flow of information via the Internet and minimal controls over the activity of users. States of a more authoritarian character have espoused the concept of “information security” and stressed the right of states to safeguard their “information space.”
Typical of this orientation was the working paper submitted by China that affirmed the right of states “to make ICT-related public policies consistent with national circumstances to manage their own ICT affairs and protect their citizens’ legitimate interests in cyberspace.” ‘Protection’ in this case, the Chinese paper made clear, was from states “using ICTs to interfere in internal affairs of other states and undermine their political, economic and social stability.” In a similar vein, Iran affirmed the primary responsibility of states for maintaining a secure ICT environment and warned against states “with subversive aims [which] attempt to overtly or covertly use cyberspace to intervene in the political, economic and social affairs and systems of other states.” There is no readily available standard by which the international community can judge what type of information would be de-stabilizing, and such decisions will remain the preserve of the sovereign states themselves.
“A sharp uptick in the conduct of offensive cyber operations by states has occurred in recent years.”
The degree to which states are prepared to accept constraints on their foreign cyber operations is another open question. Although the utility of cyberspace for achieving a wide array of benefits for humanity is widely acknowledged, it has not gained the status of a ‘global commons’ reserved for ‘peaceful purposes’ akin to that agreed in multilateral treaties for the Antarctic and outer space. A sharp uptick in the conduct of offensive cyber operations by states, including those which can produce destructive effects as well as those aimed at political and social disruption, has occurred in recent years. This activity has largely been carried out covertly, with only a handful of states offering any transparency as to the policies and doctrine governing such offensive cyber operations. The negative impact of such activity is not lost on the majority of UN member states, which are conscious of the fact that they are both vulnerable to such attacks and lack the means to retaliate if affected by them. At the OEWG, Indonesia on behalf of the Non-aligned Movement — the group of 120 states from the developing world — expressed its concern over “the militarization and weaponization of cyberspace through the development of cyber offensive capabilities in a manner that would turn cyberspace into a theater of military operations.”
Echoing these concerns, China decried that “some states take cyberspace as a new battlefield.” Russia warned that “cyber confrontation is on the rise, and if we fail to find joint efforts [and] effective ways to fight these threats, the global cyberwar will be just down the road.” Iran tried to leverage its status as the initial victim of a state conducted destructive attack (the “Stuxnet” episode targeting its nuclear program) in denouncing “certain states with offensive doctrines [which] violate the prohibition of the use of force against other countries.” Employing a phrase of questionable taste, the Iranian statement referred to the country as being “the first cyber Hiroshima in the world.”
Although the above-mentioned states are suspected of engaging in offensive cyber operations of their own, their criticism of militarization of cyberspace creates presentation problems for those Western states which openly acknowledge that they have developed offensive military cyber capabilities. Statements by states such as the UK, Australia and the Netherlands have affirmed that they possess offensive capabilities while asserting that these are employed only in a manner compatible with their obligations under international law. These states assert that they are prepared to respect the norms of responsible state behaviour that have been generated by the UN process to date.
How cyber operations fit into international law
This issue of what constitutes responsible state behaviour in cyberspace is further complicated by problems with the scope of international law and attribution for cyber operations. While earlier GGEs have affirmed the applicability of international law to cyber activity, the exact nature of that applicability remains in dispute. In a situation of armed conflict, it is generally recognized that international humanitarian law would apply to cyber operations. The International Committee of the Red Cross reaffirmed this in its statement to the OEWG: “There is no question that cyber operations during armed conflicts are regulated by international humanitarian law – IHL – just like any other weapon or means or methods of warfare used by a belligerent in a conflict.” A grey zone exists, however, regarding cyber operations below the threshold of armed conflict, and the right of states to take counter measures against cyber actions directed at them that they view to be hostile.
The legal uncertainty is compounded by the problem of attributing an offensive cyber operation to a specific state. To date, attribution has been at the discretion of individual states with no neutral forum available to judge the merits of the accusation levied by one state against another. The UN Secretary-General has called for the peaceful settlement of cyber conflict and has advocated “fostering a culture of accountability,” but as ICT4Peace pointed out in its statement, in the absence of a mechanism for the impartial attribution of wrongful cyber acts, it is very difficult to hold states to account.
Canada’s role and moving forward
Canada was an active participant in the OEWG, delivering a statement as well as submitting a working paper. It was one of only a few states that criticized the exclusion of the 18 NGOs, and it called for consideration of gender equality issues in the OEWG’s work, in addition to human rights concerns such as the need to protect human rights defenders from being targeted with digital technology.
Canada also stressed the importance of keeping the OEWG focused on the operationalization and implementation of the norms already identified by the previous GGEs rather than diluting its work in the pursuit of further norms. Given the existing tensions amongst the leading cyber powers, it will require sustained leadership by middle powers like Canada (and Australia, which has made an impressive investment in its international cyber diplomacy) to narrow the prevailing policy fractures and promote common understandings.
Putting norms of responsible state behaviour into practice will require not only the efforts of concerned states. The private sector and civil society have a vital stake in ensuring that cyberspace doesn’t become just another battleground. These constituencies are slowly beginning to mobilize their lobbying efforts directed at governments and will need to sustain the pressure. An example of an initiative to preserve a peaceful cyberspace that bridges the public and private sectors is the Paris Call for Trust and Security in Cyberspace launched by France last November. This set of principles for responsible state behaviour in cyberspace has now been endorsed by 74 states, 333 international and civil society organizations and 608 private sector entities. Such a broad-based coalition is a welcome addition to global discussions but it is worrisome that many key states are missing from the list of state supporters (China, Russia, the US, India, Iran, Brazil and South Africa, to name a few). These hold-outs will eventually have to be brought on board if UN-negotiated “norms of responsible state behaviour” are ever to be effective.
The next session of the OEWG in early December is to be devoted to receiving input from the private sector and civil society. Much will depend on how these inputs are fashioned and on the willingness of states to embrace their appeals for responsible behaviour in cyberspace. Whether the current “Wild West” of cyber operations is to give way to “Peace, Order and Good Government” is still very much an open question.
Paul Meyer is International Security Fellow, Simon Fraser University
Keeping your medical secrets
Wearable technology covers a broad range of devices. As its use becomes more common in the healthcare sector, the issue of privacy becomes more crucial. New devices can help physicians remotely monitor patients’ vital signs, sleep patterns and heart rhythms, transforming the face of medicine as we know it. These technological developments will help detect early signs of disease and aid in diagnosing medical conditions. Essentially, these devices are mini computers that send and receive data which can be used for further analysis.
This is a company that delivers IoT solutions… it might be worth investing in…
https://www.st.com/content/st_com/en.html
Waiting for Carrington!
This has to be one of the big issues that have been ignored for too long. Our whole lives are becoming electrified and dependent on a single system to provide the electricity. Have you heard of the Carrington Event? Well, solar bursts of radiation like that are not controllable. You have to plan for them. Fortunately, the waves are slow enough to give us a few minutes’ warning. There was a blackout a few years ago in Quebec, and now electric companies are preparing to protect their transformers from the saturation caused by such events, which do have to be expected. The situation is far from hopeless, but it is very expensive to prepare a grid to withstand such a crisis, so that is not being done as widely as it should be. After all, there are other natural disasters (hurricanes, etc.) that can cause just as much damage to the grid.
Getting ahead of the Christchurch Call
By Alistair Knott, Newsroom, Oct 20, 2019
https://www.newsroom.co.nz/2019/10/10/850847/getting-ahead-of-the-christchurch-call
Instead of using what amounts to censorship, tech companies signed up to the Christchurch Call would be wise to adopt a more preventative tactic, writes the University of Otago’s Alistair Knott:
We have heard a lot recently from the world’s tech giants about what they are doing to implement the pledge they signed up to in the Christchurch Call. But one recent announcement may signal a particularly interesting development. As reported in the New Zealand Herald, the world’s social media giants ‘agreed to join forces to research how their business models can lead to radicalisation’. This marks an interesting change from a reactive approach to online extremism, to a preventative approach.
Until now, the tech companies’ focus has been on improving their methods for identifying video footage of terrorist attacks when it is uploaded, or as soon as possible afterwards. To this end, Facebook has improved its AI algorithm for automatically classifying video content, to make it better at recognising (and then blocking, or removing) footage of live shooting events. The algorithm in question is a classifier, which learns through a training process. In this case, the ‘training items’ are videos, showing a mixture of real shootings and other miscellaneous events.
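As a deliberately simplified illustration of the kind of classifier the article describes, the sketch below trains a logistic-regression classifier on labelled toy examples. The feature values, labels and training setup are all invented for illustration; production systems use deep networks over video frames and audio, but the underlying idea of learning from labelled training items is the same.

```python
import math
import random

random.seed(1)

def make_example(is_violent):
    """Reduce a 'video' to three invented features (e.g. motion spikes,
    audio bursts, rapid scene cuts) plus a 0/1 label."""
    base = [0.8, 0.7, 0.6] if is_violent else [0.2, 0.3, 0.4]
    features = [b + random.uniform(-0.1, 0.1) for b in base]
    return features, 1 if is_violent else 0

# Training items: a mixture of positive and negative examples,
# as the article describes.
training_items = [make_example(i % 2 == 0) for i in range(200)]

# Logistic regression trained by stochastic gradient descent.
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.5

def predict(features):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))  # probability the clip is violent

for _ in range(50):  # training epochs
    for features, label in training_items:
        error = predict(features) - label
        for i in range(3):
            weights[i] -= lr * error * features[i]
        bias -= lr * error
```

After training, `predict` returns a high probability for clips whose features resemble the positive training examples, which is the decision a platform would then use to block or remove the upload.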
The Christchurch Call basically commits tech companies to implementing some form of Internet censorship. The methods adopted so far have been quite heavy-handed: they either involve preventing content being uploaded, or removing content already online, or blocking content in user search queries. Such moves are always closely scrutinised by digital freedom advocates. Companies looking for ways to adhere to the Christchurch pledge are strongly incentivised to find methods that avoid heavy-handed censorship.
In this connection, it is interesting to consider another classifier used by Facebook and other social media companies, which sits at the very centre of their operation. This is a classifier that decides what items users see in their feed. This classifier is called a recommender system. It is trained to predict which items users are most likely to click on.
There is some evidence that recommender systems have a destabilising effect on currents of public opinion. This is because the training data for a recommender system is its users’ current clicking preferences. The problem is that recommender systems also influence these preferences, because the items they predict to be most clickable are also prioritised in users’ feeds. Their predictions are in this sense a self-fulfilling prophecy, amplifying and exaggerating any preferences detected by users.
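The feedback loop just described can be sketched in a few lines. This is a toy model, not any platform’s actual algorithm: the recommender allocates feed share in proportion to the click rates it observes, and, as an added assumption, each round of exposure nudges user taste slightly toward what the feed already shows.

```python
# Toy model of the recommender feedback loop (not any platform's real system).
# A small 4% edge in clickability for "outrageous" items, fed back through
# the ranking step each round, steadily grows that category's feed share.

pref_out, pref_mild = 0.52, 0.50  # users' click rates per impression
feed_share = 0.10                 # fraction of the feed that is outrageous

for _ in range(20):  # 20 retraining rounds
    # Ranking step: reallocate the feed in proportion to the click
    # rate observed for each category in the previous round.
    clicks_out = feed_share * pref_out
    clicks_mild = (1 - feed_share) * pref_mild
    feed_share = clicks_out / (clicks_out + clicks_mild)
    # Added assumption: exposure shifts user taste slightly toward
    # what the feed already shows (the self-fulfilling prophecy).
    pref_out += 0.01 * feed_share

print(f"outrageous share of the feed after 20 rounds: {feed_share:.2f}")
```

Even with the taste-drift assumption removed, the ranking step alone roughly doubles the outrageous share over 20 rounds, which is the amplification effect the text describes.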
This effect may cause recommender systems to polarise public opinion, by leading users to extremist positions. As is well-known, people have a small tendency to prefer items that are controversial, scandalous or outrageous – not because they are extremists, but just because it’s human nature to be curious about such things. This small tendency can be amplified by recommender systems. Obviously, social media systems aren’t responsible by themselves for extremism. But there’s evidence they push in this direction. A recent study from Brazil is particularly convincing, showing that Brazilian YouTube users consistently migrate from milder to more extreme political content, and that the recommender algorithm supports this migration.
Tech companies certainly don’t design their recommender systems to encourage extremism. The systems are simply designed to maximise the amount of time users spend viewing content from their own site – and thus to maximise profits from their advertisers. A tech company’s recommender system is a core part of its business model. This is why it’s so interesting to hear reports, for the first time, that social media companies are beginning to question whether their ‘business model’ can lead to extremism.
It’s conceivable that very small changes in recommender algorithms could counteract their subtle effects in tilting public opinion towards extremism. Any such changes would still be a form of ‘Internet censorship’, but of a very light-touch kind. There is no question of deleting material from the Internet, preventing uploads, or blocking users’ search requests. In fact, there is no denial of user requests at all, since recommender systems already deliver content unbidden into users’ social media feeds. Recommender systems are already making choices on behalf of users. But at present, these choices are driven purely by tech companies’ drive to maximise profits. What is being contemplated are subtle changes to these systems that take the public good into account alongside profits.
As well as being less heavy-handed in censorship terms, these changes have a preventative flavour rather than a reactive one. Rather than waiting for terrorist incidents and then responding, the proposed changes act pre-emptively, to defuse the currents that lead to extremism. They are very appealing from this perspective too.
The question of how recommender algorithms could be modified to defuse extremism is an important one for debate, both within tech companies, and in the public at large. The tech companies are best placed to run experiments with different versions of the recommender system and observe their effects. (They routinely do this already.) The public should have a role in discussing what sorts of extremism should be counteracted. (There’s presumably no harm in being an extreme Star Wars fan.) The crucial thing is to begin a discussion between the tech companies and the public they claim to serve. We hope we are seeing the beginnings of this discussion in the recent announcement.
The Cybersecurity 202: Democrats hope new report on election interference will prompt action
By Joseph Marks, Washington Post Oct. 9. 2019
THE KEY
Some Senate Democrats see an opening to take action on election interference following the release yesterday of a mammoth new report outlining Russian disinformation efforts in the 2016 election.
Senate Majority Leader Mitch McConnell (R-Ky.) has been blocking the most expansive of those efforts, arguing that states shouldn’t be forced to adopt federal election mandates. But the Republican leader has been less adamant about combating disinformation.
But some key Democrats say the second volume of the Senate Intelligence Committee’s bipartisan report — which offers a damning portrait of efforts by Russia’s propaganda arm that only increased after the election — provides an opening. Activity by accounts linked to Russia’s notorious Internet Research Agency jumped 200 percent on Instagram after the election and 50 percent on Facebook, Twitter and YouTube, the committee found.
“There’s a kind of new awakening, if you will, about the magnitude of the problem we’re facing and I certainly hope this report will help stimulate some movement,” Sen. Angus King (I-Maine), a member of the committee, told me.
The timing of the report’s release could also be auspicious. McConnell recently eased his hard line by endorsing $250 million in extra money for states to secure their elections but without mandating any particular cybersecurity protections such as paper ballots or post-election audits.
“There may be some softening of his position and the fact this report was unanimous is very significant. This came from a very diverse committee and the findings are unequivocal,” King said.
McConnell’s far from on board yet, however.
A McConnell spokesman declined to discuss specific disinformation bills and pointed me to a CNBC interview last week in which McConnell praised Trump administration efforts to protect the 2018 election as “a big success story” and said he’s “convinced we’re ready for 2020.”
“Any foreign country that messes with us will have a serious problem in return,” McConnell said.
The new report, spearheaded by Intelligence Chairman Richard Burr (R-N.C.) and ranking Democrat Mark Warner (Va.), outlines several ways lawmakers could take action to tighten election security. It presses for Congress to consider legislation to help block Russia and other adversaries from buying online political ads and to foster cooperation between social media companies and law enforcement — though it doesn’t endorse any particular bills.
And some Republicans have previously supported more robust action aimed at 2020.
Soon after the report came out, Sen. Amy Klobuchar (D-Minn.) took to Twitter to tout the Honest Ads Act, which would mandate transparency about who pays for online political advertisements. Trump ally Sen. Lindsey O. Graham (R-S.C.) is a co-sponsor.
A Burr aide declined to speculate on any next steps the Senate might take and Graham’s office didn’t respond to a query about whether he’ll push the Senate to consider his bill.
Warner also put out a clarion call for Senate action, warning that “Congress must step up and establish guardrails to protect the integrity of our democracy.”
Senate Minority Leader Chuck Schumer (D-N.Y.) said the report “makes it crystal clear to everyone that Vladimir Putin exploited social media to spread false information in the 2016 elections and that the Senate must take action to ensure Americans know who is behind online political ads to help prevent it from happening again.”
He also attacked McConnell for “block[ing] a full-throated U.S. response.”
Congressional action is also likely to be complicated by the House impeachment inquiry into whether Trump improperly leaned on Ukraine to investigate his political rivals. Trump has routinely lashed out at any talk of Russia’s 2016 interference operations, arguing that it takes away from his come-from-behind election victory.
And Trump’s pushback on disinformation could make those efforts even more effective in 2020.
“One of the things we found as a committee is that probably the best defense against disinformation is citizens being informed about the fact they’re being misinformed … and taking it with a grain of salt,” King told me. “For the president to continue to deny this, he’s disarming the country. He’s disarming one of the best defenses we have.”
PINGED, PATCHED, PWNED
PINGED: A secret intelligence court ruled last year that some FBI searches of thousands of pieces of raw intelligence violated the constitutional rights of Americans, Dustin Volz and Byron Tau at the Wall Street Journal report. The ruling — disclosed by the intelligence community Tuesday — marks a rare censure of U.S. surveillance activities and prompted fresh criticism of the FBI’s oversight of the program.
The Foreign Intelligence Surveillance Court found between 2017 and 2018 that the FBI was conducting searches targeting Americans that may have violated the Fourth Amendment, which protects against unreasonable searches. The searches also could have run afoul of the law authorizing the program, which requires that warrantless searches of the surveillance database be backed by criminal investigations or conducted in pursuit of foreign intelligence information. In one case, a contractor used the highly secretive database to search for himself and relatives, Dustin and Byron report.
The disclosure of the ruling reignited criticism of the controversial legal provision authorizing the program, Section 702 of the Foreign Intelligence Surveillance Act. Some senators argued the revelations underscore why Section 702 should not have been renewed earlier this year.
“Today’s release demonstrates how baseless the FBI’s position was and highlights Congress’ constitutional obligation to act independently and strengthen the checks and balances on government surveillance,” Sen. Ron Wyden (D-Ore.) wrote. He also expressed concern that the remaining redacted portions of the court opinion contain additional information “the public deserves to know.”
Rep. Justin Amash (I-Mich.), who moved to let the program expire, criticized President Trump for pushing to reauthorize it.
PATCHED: Twitter may have “inadvertently” taken emails and phone numbers users shared with the company for cybersecurity purposes and used them for advertising, the company revealed in a blog post yesterday. The company says it didn’t share any personal data with the advertisers and that it ended the practice last month.
But the privacy gaffe could still land the company in hot water with federal regulators, my colleague Tony Romm reports. The Federal Trade Commission penalized Facebook in a similar case for failing to disclose that it took phone numbers users provided to verify their identities and used them to target the users with advertisements. The FTC could also slap Twitter with heavy fines if the agency finds the recent incident violates the terms of a 2011 settlement between the agency and the social media giant.
This isn’t Twitter’s first recent data security problem. In January, the company accidentally and temporarily disabled a setting that let Android users protect their tweets. Months later, the company revealed it was “inadvertently collecting and sharing iOS location data” with an unnamed party. The company fixed both issues.
California Secretary of State Alex Padilla. (Rich Pedroncelli/Associated Press)
PWNED: California’s top election official is urging political parties to ramp up cybersecurity protections, including boosting employee training and mandating secure logins to protect upcoming primaries and the 2020 general elections.
“Elections administrators cannot be alone in the fight against malicious actors who seek to undermine our elections,” California Secretary of State Alex Padilla writes in letters to Republican and Democratic party chairs.
The letter comes as national political parties are also boosting efforts to ensure campaigns are protected against foreign hacking in 2020. The Democratic National Committee has issued a checklist of basic protocols for campaigns and organized trainings. The National Republican Congressional Committee has pledged hands-on cybersecurity assistance for campaigns.
But hacking efforts may just be getting started. Just last week, Microsoft revealed Iranian hackers had already targeted an undisclosed presidential campaign.
Solar Storms and Cyber-Security
What effect would geomagnetic and solar storms have on cyber-security? In 1859, a large solar storm hit Earth, causing the electronics of the day (such as telegraphs) to go haywire. In more recent times (the Cold War era, etc.), atmospheric conditions and solar flares have almost sparked nuclear exchanges. Are current cyber systems shielded adequately from these phenomena? Are operators able to distinguish these phenomena from hostile attacks?
I think perhaps one of the earliest examples of cyber-warfare was the intercepted Zimmerman telegram in 1917 – between Germany and Mexico. Are there other examples of pre-internet “cyber” (electric, digital, etc.) warfare that should be considered within these contexts?
Should he mention it or not?
Rep. Eric Swalwell (D-Calif.) applauded the crowd of cybersecurity researchers uncovering dangerous bugs in voting machines and other election systems at a security conference here — but he’s in a bind about how to talk about election security with constituents. He believes chances are almost nil that Republicans will join Democrats to pass legislation mandating fixes to improve election security before the 2020 contest. By banging the drum about potential security weaknesses, he worries Democrats may convince citizens that the election is bound to be hacked — and that there’s no point in voting.
The NSA Must Share More Info (with YOU?)
Maybe the NSA is good for something. At least now they are intending to share more information. (With whom?) Here’s another piece in the Washington Post by Joseph Marks, who certainly is following these affairs closely.
“New NSA cyber lead says agency must share more info about digital threats,” Sept. 5.
THE KEY
The NSA is the U.S. government’s premier digital spying agency and it has a well-earned reputation for keeping secrets. But the agency needs to stop keeping so many things confidential and classified if it wants to protect the nation from cyberattacks.
That’s the assessment from Anne Neuberger, director of NSA’s first Cybersecurity Directorate, which will launch Oct. 1 and essentially combine the work of many disparate NSA divisions dealing with cybersecurity, including its offensive and defensive operations.
The directorate’s mission is to “prevent and eradicate” foreign hackers from attacking critical U.S. targets including election infrastructure and defense companies, Neuberger said yesterday during her first public address since being named to lead the directorate in July.
Neuberger acknowledged the difficulty of her mission during an onstage interview at the Billington Cybersecurity Summit, but also said the growing hacking threats from Russia, China and other U.S. adversaries mean the nation “must” achieve it.
“The nation needs it … the threat demands it and the nation deserves that we achieve it,” Neuberger said.
That mission also means, however, that NSA, which was once colloquially known as “no such agency” and has traditionally kept mum to protect its own hacking operations and secret sources, must start sharing more threat data with cybersecurity pros in the private sector, she said. And the NSA will have to share that information far more quickly than it has in the past, when many recipients complained that, by the time they get the information, it’s no longer useful, she said.
In some instances, the agency will have to look for “creative approaches” to share that information, Neuberger told reporters after her talk.
For instance, the agency may look for ways to present cybersecurity threat information so it can’t be traced back to the person or group that shared it, she said. Or the agency may look for cybersecurity companies that have the same information but from a different source and highlight those reports.
The new directorate is, in part, an acknowledgement that over the course of several previous reorganizations the spying agency hasn’t focused enough on protecting U.S. organizations from foreign cyberattacks, NSA chief Gen. Paul Nakasone told the Wall Street Journal when he announced the new directorate in July.
Neuberger learned how vital it is to share information about hacking threats during the run-up to the 2018 midterm elections when she was co-leader of an election security task force that combined the work of NSA and U.S. Cyber Command, the military’s hacking wing.
“A particular lesson was that we have to proactively work with private-sector partners, for example social media companies … to help them understand what they’re up against,” she said.
In that effort, which NSA wants to repeat in 2020, the agency frequently shared information about hacking operations and social media influence operations with the FBI, which then passed the information along to social media companies and others to help them defend themselves, she said.
“Those companies have to invest in the problem themselves … but, when they’re up against a nation-state, there are some insights and information that we should share … to enable them to look for that information on their platforms and shut it down,” she said.
In addition to safeguarding the 2020 elections, Neuberger said, the Cybersecurity Directorate will focus heavily on protecting defense companies, which have been extensively targeted by Chinese hackers looking to copy U.S. advances in military technology.
The directorate will also focus on disrupting foreign ransomware rings, she said, which lock up organizations’ computer files and refuse to release them until the victims pay a ransom.
Ransomware attackers have increasingly been targeting specific industries, she said, and the NSA is worried U.S. adversaries could try to use ransomware to disrupt the 2020 elections by locking up some vital systems on Election Day.
Hybrid Warfare
Excerpt:
“Misinformation poses the most serious risk, says Futter, to “those ICBMs in the US and Russia that only need a few minutes to go.” Simple interference in communications – Unal points to satellites as a potential weak point – could be enough to stop the most important military decisions being made with a cool head. “Keeping weapons on high alert in a cyber environment,” says Futter, “is an enormous risk.”
Beyza Unal recalls the story – related memorably in David E. Hoffman’s Pulitzer-winning investigation of automatic nuclear systems, Dead Hand – of the most cool-headed decisions of the Cold War. The Russian lieutenant-colonel Stanislav Petrov was in charge of the Serpukhov-15 early warning station on the night in September 1983 when the Soviet Union’s satellites, sending data to the country’s most powerful supercomputer, registered a nuclear attack by the US. Despite being warned that five ICBMs were on their way to the USSR, Petrov told the decision-makers above him that the signals were a false alarm. “And he was right,” says Unal. “But a cyberattack could look like that, a spoofing of the system. Some say that humans are the weakest link in cyber issues. I say humans are both the weakest link and the strongest link. It depends on how you train them.””
and
“In the spring of 2013, a Ukrainian army officer called Yaroslav Sherstuk developed an app to speed up the targeting process of the Ukrainian army’s Soviet-era artillery weapons, using an Android phone. The app reduced the time to fire a howitzer from a few minutes to 15 seconds. Distributed on Ukrainian military forums, the app was installed by over 9,000 military personnel.
“By late 2014, however, a new version of the app began circulating. The alternate version contained malware known as X-Agent, a remote access toolkit known to be used by Russian military intelligence. The cyber security firm Crowdstrike, which discovered the malware, said that X-Agent gave its users “access to contacts, SMS, call logs and internet data,” as well as “gross locational data”. In the critical battles in Donetsk and Debaltseve in early 2015, the app could have shown Russian forces where Ukraine’s artillery pieces were, who the soldiers operating them were talking to, and some of what they were saying. It may be, then, that Russia’s concern – Futter describes it as “panic” – about the risks of hybrid warfare is based on the knowledge that it has been used in battle, and it works.”
Canadian Security
I had only recently learned of the CSA (Canadian Security Agency), as my education in information security demanded it. I did a search on it and realized the agency’s name might have been miscommunicated or misinterpreted by me; it is actually the CSE (Communications Security Establishment), whose website I found.
It has a very interesting site (https://www.cse-cst.gc.ca/en/careers-carrieres), which I briefly looked over. The gist of it all is that I am happy to know we have such an agency to watch over our national boundaries and protect us from cyber threats abroad, from Russia and China and even some of our friendly neighbours, whoever they may be. So many conflicting technical standards produce wide, gaping holes in our information communication infrastructures, not to mention software bugs and malicious virus activity. The average computer user is in a difficult position and has to make use of available protection software to stay safe. That requires an awareness of what products are available and learning how they are used. Products like AVAST, AVG and McAfee now offer not just antivirus but tool suites to cope with potential computer intrusions. New tools are rolled out quickly, and I find myself searching for browsers with high security, like Epic, Brave and the like, that don’t track my information. Connecting through VPNs seems to be encouraged, but all these things, if free, usually cost the price of sales pitches and repeated upgrade offers. Choose your tools wisely and guard your IT footprint.
Spreading Political Misinformation
We’d better worry, not only about the military application of Internet skulduggery, but even about the inadvertent consequences of its normal use. This research suggests that Bolsonaro’s victory in Brazil may have been driven largely by the spread of misinformation from YouTube through WhatsApp among Brazil’s poor. So what kind of action can be taken against this?
https://www.nytimes.com/column/the-interpreter/
From Paul Meyer:
ICT4Peace
This is the submission by ICT4Peace, written by Paul Meyer for the UN Open-Ended Working Group on Cyber Security, which will begin its work in September. (The UN Office of Disarmament Affairs has now posted it to the official site for the OEWG: https://www.un.org/disarmament/open-ended-working-group/ .)
Here is the submission itself:
ICT4Peace Submission to the UN Open-Ended Working Group (OEWG) on ICT and International Security
We commend the OEWG’s openness to input from civil society, academia and the private sector and ICT4Peace will look forward to contributing to its work through a sustained dialogue. The 2015 report of the UN Group of Governmental Experts (GGE) noted that even as ICTs have grown in importance for the international community, “there are disturbing trends that create risks to international peace and security. Effective cooperation amongst states is essential to reduce these risks”. More recently, the Secretary General, in connection with his Agenda for Disarmament, has warned that malicious activity in cyberspace has already been directed at critical infrastructure with serious consequences for international peace and security.
It is incumbent on the international community to work to counter such threats and to ensure the “secure and peaceful ICT environment” that your authorizing resolution (A/RES/73/27) stipulates. The OEWG represents the latest installment of the 20-year UN endeavour to address developments in ICTs in the context of international security. This effort has yielded some important results, notably the consensus GGE reports of 2010, 2013 and 2015. Yet these positive findings have not been adequately reflected in the actual conduct of states, which have pursued a “militarization” of cyberspace. With increasing reports of state-conducted offensive cyber operations, including the targeting of critical infrastructure in other countries, promoting adherence in practice to UN-identified norms of responsible state behaviour is vital. If the international community is to foster digital human security alongside cybersecurity for states, it will need to keep pace with these developments and ideally steer them towards cooperative ends.
It is our hope and expectation that the OEWG will deliver results that tangibly contribute to conflict prevention and preserve cyberspace as a realm for peaceful purposes. In doing so it will need to build on the accomplishments of the past, while “further developing” these and promoting their implementation. ICT4Peace believes the following norms merit priority attention:
1. Non-targeting of critical infrastructure, including devising common understandings as to what constitutes such infrastructure.
2. Non-targeting of states’ Computer Emergency Response Teams (CERTs).
3. Non-involvement of these Emergency Response Teams in offensive cyber operations.
4. Non-use of proxies by states in conducting offensive cyber operations.
5. Responsibility of states to prevent or prosecute malicious cyber activity originating from their territory.
6. Commitment to a responsible disclosure of vulnerabilities to help preserve the integrity of cyberspace, and transparent policies for handling such vulnerabilities.
7. Transparency of policy and doctrine governing state offensive cyber operations.

In addition to developing these norms, which have already been generated by the UN GGE processes, we suggest that the OEWG also develop proposals for dealing with four other pressing problems:

Attribution: The necessity for substantiation of “accusations of organizing and implementing wrongful acts brought against States” is acknowledged in Resolution 73/27, but if this norm is to be implemented it will require a reliable attribution mechanism. ICT4Peace sees merit in developing a neutral, international cyber attribution agency, which could take the form of a public-private partnership drawing upon capabilities in the private sector. ICT4Peace has published a paper on this theme:
https://ict4peace.org/wp-content/uploads/2018/12/ICT4Peace-2019-Trust-and-Attribution-in-Cyberspace.pdf

Disinformation, Hate Speech and Political Interference: These actions affect every means of expression at both national and international levels, but ICTs, including social media, substantially increase their impact. If any norm in this regard is to be observed in practice, it will require definitional and operational elaboration. As these issues are somewhat distinct from the international security context of the OEWG and could complicate its efforts, ICT4Peace suggests that separate fora be tasked with this work.
Export Controls: There has been increasing concern expressed about sophisticated cyber surveillance equipment being misused by some states to monitor individuals and impinge on their civil and privacy rights. ICT4Peace would like to see the OEWG develop a recommendation that would require states to include such equipment and software in their national export control regimes.

AI and Cyber Security: The potential of Artificial Intelligence to amplify some of the problematic aspects of current state-conducted cyber operations will require extending the normative framework for responsible state behaviour in cyberspace to this potent new technology. The OEWG could draw upon the earlier work of the CCW’s GGE on Lethal Autonomous Weapons (LAWS) in formulating initial guidance in this regard.

Finally, we would like to stress that the cumulative economic and financial cost of cyber incidents to national economies, and in particular to developing and emerging economies, has become enormous. It has therefore become evident that national cybersecurity building has become a necessary state function. However, many developing countries lack the necessary resources to build and maintain the required national cybersecurity institutions and technical and human capacities. Cybersecurity must therefore become a priority in national development strategies and cooperation agreements. The need for cybersecurity capacity building in developing countries has already been highlighted in the UN GGE 2015 report and should also be reflected in the OEWG outcomes.
Geneva, 4 August 2019
Contact: Daniel Stauffacher, President, ICT4Peace, danielstauffacher@ict4peace.org
https://unoda-web.s3.amazonaws.com/wp-content/uploads/2019/08/ICT4PeaceBrief-OEWG-Aug42019.pdf
Bugs in the Plane
The Cybersecurity 202: Hackers just found serious vulnerabilities in a U.S. military fighter jet
By Joseph Marks (From Washington Post‘s The Cybersecurity 202) Aug 14.
And they did it with the Air Force’s blessing.
Read more
https://www.washingtonpost.com/news/powerpost/wp/category/the-cybersecurity-202/?wpisrc=nl_cybersecurity202&wpmm=1
Building Ethics, Not Bombs
The Role of Scientists and Engineers in Humanitarian Disarmament
By E. Golding
So is a scientist responsible for the harms caused by the military uses of their discoveries and inventions? How about the medical principle: “Do no harm”?
Importance of Real-Time Reports and Traceability in Software Testing
In this rather technical article for coders, Somesh Roy discusses the factors that cannot be resolved unless there are good reports kept that can be traced. (Or: How are you going to fix it if you can’t find it?)
https://www.kovair.com/blog/importance-of-real-time-reports-and-traceability-in-testing/?fbclid=IwAR1s9kVGSyRFgf7Mk4p695_iB6ohT-6BAbjxnzu9ZR8ttxlJG3wKNY2lJzE
Software companies rush to get their products to market, buggy or not
Yes, accidents do happen, even to careful people. But careful programmers and their demanding bosses can greatly reduce the bugginess of software. They will do so only when the law holds them responsible for bad results.
Warning! Please question this post!
There is something seriously wrong with this comment. It is at least VASTLY OUT OF DATE! For one thing, Bruce Blair died several months ago, and at that time he was certainly not advocating a better capacity for retaliatory strikes. He founded Global Zero, which promoted nuclear disarmament, nothing less. So I checked this reference and it was dated 1984! Why would you be posting something like this in 2020?
Should Trump wage cyber war?
There have been several news stories reporting speculations or insider information that Trump had used a cyberattack against Iran.
They did not seem to get much press coverage and no outrage at all. Whether you like Iran’s government or not, it will pay to think carefully about this kind of quasi-warfare. If it gets to be considered normal, we will have a much harder time putting a stop to it.
Schneier’s Advice
Carry On: Sound Advice from Schneier on Security
By Bruce Schneier.
Wiley, 2013
Up-to-the-minute observations from a world-famous security expert. Bruce Schneier is known worldwide as the foremost authority and commentator on every security issue from cyber-terrorism to airport surveillance.
https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=schneier+on+security&oq=Schneier
Nuclear threat
Bruce Blair, OSTI.GOV. U.S. Department of Energy.
Strategic command and control: Redefining the nuclear threat
To many defense analysts, C³I (command, control, communications and intelligence) is the most vulnerable component of our nuclear deterrent. Bruce Blair, who once served in the Strategic Air Command as a Minuteman launch control officer and is a current Defense Department official, has written an important and valuable analysis of the physical and organizational arrangements which exist to control U.S. strategic forces, tracing their evolution over 25 years. His recommendations call for (a) near-term improvements to assure that the system will not collapse under a Soviet first strike and will provide for prompt retaliation and (b) a long-term goal of delaying a retaliatory strike by at least 24 hours so as to maximize chances for survival.
https://www.osti.gov/biblio/5734349
I don’t understand any of this. It is gibberish to me. But it sounds like it could kill me — and everyone else too. Please can someone speak English here?
Trend Micro shared: Sept. 17, 2015
“FBI Warns Public on Dangers of the Internet of Things”
In a Public Service announcement issued last week, the law enforcement agency discussed the potential security risks of using interconnected devices such as smart light bulbs, connected cars, smart fridges, wearables, and other home security systems. The PSA included network connected printers as well as fuel monitoring systems.
Last July, vehicle security researchers Chris Valasek and Charlie Miller demonstrated how a Jeep Cherokee’s brakes and other critical control systems can be remotely controlled by anyone with an internet connection. According to Valasek and Miller, they can easily take control of the vehicle by sending data to its interconnected entertainment system and navigation system via a mobile phone network. In response to this, Chrysler announced the recall of 1.4 million vehicles that may be affected by the security hole.
From “What to Consider When Buying a Smart Device”, here are a few more ways to improve the security of your devices against possible IoT threats:
* Enable all security features on all smart devices
* Always update the device firmware
* Use secure passwords
* Close any unused ports on devices and routers
* Utilize encryption for all networks and devices
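The “close any unused ports” item can be checked from the outside. As a minimal sketch (not a substitute for a real scanner such as nmap), a short Python script can report which TCP ports on a device accept connections; the loopback address and port list below are illustrative placeholders, substitute your own device’s address:

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                # connect_ex returns 0 on success, an errno value otherwise
                if s.connect_ex((host, port)) == 0:
                    found.append(port)
            except OSError:
                pass  # treat unreachable hosts or ports as closed
    return found

# Check a few common service ports on this machine (port list is illustrative)
print(open_ports("127.0.0.1", [22, 23, 80, 443, 8080]))
```

Any port that shows up here and isn’t tied to a service you knowingly run is a candidate for closing in the device’s or router’s settings.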
https://www.trendmicro.com/vinfo/us/security/news/internet-of-things/fbi-warns-public-on-dangers-of-the-internet-of-things/
the IoT Risks are Coming!
The internet of things is a giant security risk because of the lack of security protocols built into smaller computerized devices. This makes these devices extremely vulnerable to script kiddies and anyone who understands the most basic GUI-based hacking or monitoring programs. As the technology and security protocols develop over time, such risks should become less worrisome.
Internet Society. 15 October 2015
“The Internet of Things (IoT): An Overview.”
Understanding the Issues and Challenges of a More Connected World
On 15 October 2015 the Internet Society published this 50-page whitepaper providing an overview of the IoT and exploring related issues and challenges. You may download the complete document at the link below. The Executive Summary is included to provide a preview of the full document.
More of our coverage and information about the Internet of Things may be found at https://www.internetsociety.org/issues/iot
This IoT Overview whitepaper is also available in Russian and in Spanish.
https://www.internetsociety.org/resources/doc/2015/iot-overview/
Cyberperson’s Code of Ethics
IEEE-CS/ACM Joint Task Force on Software Engineering Ethics and Professional Practices
Short version:
1. PUBLIC – Software engineers shall act consistently with the public interest.
2. CLIENT AND EMPLOYER – Software engineers shall act in a manner that is in the best interests of their client and employer consistent with the public interest.
3. PRODUCT – Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
4. JUDGMENT – Software engineers shall maintain integrity and independence in their professional judgment.
5. MANAGEMENT – Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.
6. PROFESSION – Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.
7. COLLEAGUES – Software engineers shall be fair to and supportive of their colleagues.
8. SELF – Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.
https://www.computer.org/education/code-of-ethics
Samrat Bhadra shared a post:
“Brute Force Attack Against FACEBOOK – How to Keep Your Facebook Account Safe”
Brute force is a mechanism used to identify the password of an online account (or a correct combination of user ID and password, in the case of an application-specific attack) by automatically generating possible passwords and attempting them on the target web application. The success probability of a brute-force system depends largely on its intelligence in generating passwords in the most effective sequence, so that the correct password can be identified with the least number of trials.
Less than 1% of all passwords are more than ten characters long. That is what makes brute force a viable attack. A strong password is what can keep your Facebook account safe from hackers.
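The arithmetic behind that statistic is worth seeing: each extra character multiplies the number of candidates a brute-force attack must cover. A small sketch (the guess rate of 10 billion per second is an assumed figure for illustration, not from the post):

```python
def keyspace(alphabet_size, length):
    """Number of candidate passwords a brute-force attack must cover."""
    return alphabet_size ** length

# Lowercase-only passwords (26 symbols) vs. the full printable set (~95 symbols)
for n in (8, 10, 12):
    print(n, f"{keyspace(26, n):.2e}", f"{keyspace(95, n):.2e}")

# At an assumed 10 billion guesses per second, exhausting a 95-symbol,
# 10-character keyspace takes on the order of centuries:
seconds = keyspace(95, 10) / 1e10
print(f"{seconds / (3600 * 24 * 365):.0f} years")
```

This is why length matters more than clever substitutions: going from eight to ten characters multiplies the attacker’s work by thousands.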
http://blog.lamanguste.com/2017/12/12/brute-force-attack-against-facebook-how-to-keep-your-facebook-account-safe/
Samrat Bhadra shared a post.
“Critical Vulnerabilities in Microsoft Products is on the Rise”
The number of vulnerabilities in Microsoft products more than doubled, from 325 in 2013 to 685 in 2017, as reported by Avecto in its Microsoft Vulnerabilities Report 2017. Moreover, a record 232 new Windows vulnerabilities were reported this year.
Key Findings:
Removing admin rights would mitigate 80% of all Critical Microsoft vulnerabilities in 2017.
The number of reported vulnerabilities has risen 111% over five years (2013-2017).
There has been a 54% increase in Critical Microsoft vulnerabilities since 2016 and 60% in five years (2013-2017).
95% of Critical vulnerabilities in Microsoft browsers can be mitigated by removing administrator rights.
There has been an 89% increase in Microsoft Office vulnerabilities in the past five years.
Almost two thirds of all Critical vulnerabilities in Microsoft Office products are mitigated by removing admin rights.
Despite being widely regarded as the most secure Windows OS ever, Windows 10 vulnerabilities rose by 64% in 2017.
Removing admin rights would mitigate almost 80% of Critical vulnerabilities in Windows 10 in 2017.
Critical vulnerabilities in Microsoft Browsers are up 46% since 2013.
88% of all Critical vulnerabilities reported by Microsoft over the last five years would have been mitigated by removing admin rights.
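The headline figures above can be sanity-checked against each other; the quoted 111% rise follows directly from the 325 and 685 counts in the post:

```python
def pct_increase(old, new):
    """Percentage increase from `old` to `new`."""
    return (new - old) / old * 100

# 325 vulnerabilities reported in 2013, 685 in 2017 (figures from the post)
print(round(pct_increase(325, 685)))  # prints 111, matching the reported rise
```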
http://blog.lamanguste.com/2018/02/19/critical-vulnerabilities-in-microsoft-products-is-on-the-rise/?
What does this mean: “removing administrator rights?” Maybe it means that too many people let strangers have the right to tamper with a website? Or does it mean that most administrators should have access to fewer opportunities to mess around with the browser? There are things here that might be useful if we understood more. Thank you.
Jon Fingas, @jonfingas
06.22.19 engadget
“US cyberattack reportedly knocked out Iran missile control systems”
The President reportedly signed off on the digital strike. Washington Post sources say the President greenlit a long-in-the-making cyberattack that took down Iranian missile control computers on the night of June 20th. The exact impact of the Cyber Command operation isn’t clear, but it was described as “crippling” — Iran couldn’t easily recover, one tipster said.
Source: Washington Post
https://www.engadget.com/2019/06/22/us-cyberattack-reportedly-knocked-out-iran-missile-control-syste/
Tyler Durden
Fri, 06/28/2019 –
“Florida City Pays $462,000 In Ransom After Second Cyberattack Cripples City’s Infrastructure”
Lake City’s council approved the measure during an emergency meeting Monday night and will be paying about $462,000 via Bitcoin, by way of the city’s insurer. This payment follows a similar incident in Riviera Beach, a city of 34,000 near West Palm Beach, where the city’s council authorized a similar $600,000 ransom payment.
Emergency services weren’t affected. But Lake City authorities worried they wouldn’t be able to access encrypted files such as ordinances, public-record requests and utility information.
The FBI advises against paying hackers, saying there’s no guarantee they will release data and that it could make victims susceptible to future attacks. But some victims don’t have a choice: for instance, in March, Jackson County, Georgia paid $400,000 after realizing a cyber attack had compromised its backups.