Overview: Cyber Threats


Author: Paul Meyer

Chair, Canadian Pugwash Group | Senior Advisor, ICT4Peace

Cyberspace, the broad term for the globally networked computer systems of which the Internet is the chief embodiment, is a unique, human-created environment. The potential of information and communication technology to benefit humanity is vast, and the growth in its use worldwide has been exponential. Today, close to four billion people are connected to the Internet and a community of “netizens” has emerged.

Unfortunately, the growth of cyberspace has not been matched by a similar development of global governance for it. Even more worrisome is the degree to which cyberspace has become “militarized”, with states developing not only capabilities for the defence of their own systems, but also offensive capabilities that threaten damage and destruction to entities beyond their borders. These trends within the national security establishments of leading cyber powers have accelerated, and the detrimental impact of cyber operations on civilian interests has grown. A narrative of “cyber war” has been espoused by major states, depicting this remarkable product of human ingenuity as just another “war-fighting domain”.


Video interview with Paul Meyer


Video credit: ICT4Peace Foundation. A longer interview is available on YouTube at https://youtu.be/BveJ3V1ADUo.


This is an interesting, yet exceptionally alarming article (opinion piece) by Dr. Bruce G. Blair – one of the co-founders of Global Zero.

I am cross-posting it to the Overview: War and Weapons and Plank 1 comments sections due to its intersectional relevance. It is alarming to consider how vulnerable nuclear weapons systems have been – and continue to be – and how upgrading them to newer computer systems could create even more vulnerabilities.

Title: Why Our Nuclear Weapons Can Be Hacked
Author: Blair, Bruce G.
Publication(s): New York Times
Date: 14 March 2017
Link: https://www.nytimes.com/2017/03/14/opinion/why-our-nuclear-weapons-can-be-hacked.html

Article Excerpt(s):

“It is tempting for the United States to exploit its superiority in cyberwarfare to hobble the nuclear forces of North Korea or other opponents. As a new form of missile defense, cyberwarfare seems to offer the possibility of preventing nuclear strikes without the firing of a single nuclear warhead.

But as with many things involving nuclear weaponry, escalation of this strategy has a downside: United States forces are also vulnerable to such attacks.

Imagine the panic if we had suddenly learned during the Cold War that a bulwark of America’s nuclear deterrence could not even get off the ground because of an exploitable deficiency in its control network.

We had such an Achilles’ heel not so long ago. Minuteman missiles were vulnerable to a disabling cyberattack, and no one realized it for many years. If not for a curious and persistent President Barack Obama, it might never have been discovered and rectified.

In 2010, 50 nuclear-armed Minuteman missiles sitting in underground silos in Wyoming mysteriously disappeared from their launching crews’ monitors for nearly an hour. The crews could not have fired the missiles on presidential orders or discerned whether an enemy was trying to launch them. Was this a technical malfunction or was it something sinister? Had a hacker discovered an electronic back door to cut the links? For all the crews knew, someone had put all 50 missiles into countdown to launch. The missiles were designed to fire instantly as soon as they received a short stream of computer code, and they are indifferent about the code’s source.

It was a harrowing scene, and apprehension rippled all the way to the White House. Hackers were constantly bombarding our nuclear networks, and it was considered possible that they had breached the firewalls. The Air Force quickly determined that an improperly installed circuit card in an underground computer was responsible for the lockout, and the problem was fixed.

But President Obama was not satisfied and ordered investigators to continue to look for similar vulnerabilities. Sure enough, they turned up deficiencies, according to officials involved in the investigation.

One of these deficiencies involved the Minuteman silos, whose internet connections could have allowed hackers to cause the missiles’ flight guidance systems to shut down, putting them out of commission and requiring days or weeks to repair.

These were not the first cases of cybervulnerability. In the mid-1990s, the Pentagon uncovered an astonishing firewall breach that could have allowed outside hackers to gain control over the key naval radio transmitter in Maine used to send launching orders to ballistic missile submarines patrolling the Atlantic. So alarming was this discovery, which I learned about from interviews with military officials, that the Navy radically redesigned procedures so that submarine crews would never accept a launching order that came out of the blue unless it could be verified through a second source.

Cyberwarfare raises a host of other fears. Could a foreign agent launch another country’s missiles against a third country? We don’t know. Could a launch be set off by false early warning data that had been corrupted by hackers? This is an especially grave concern because the president has only three to six minutes to decide how to respond to an apparent nuclear attack.

This is the stuff of nightmares, and there will always be some doubt about our vulnerability. We lack adequate control over the supply chain for nuclear components — from design to manufacture to maintenance. We get much of our hardware and software off-the-shelf from commercial sources that could be infected by malware. We nevertheless routinely use them in critical networks. This loose security invites an attempt at an attack with catastrophic consequences. The risk would grow exponentially if an insider, wittingly or not, shares passwords, inserts infected thumb drives or otherwise facilitates illicit access to critical computers.

One stopgap remedy is to take United States and Russian strategic nuclear missiles off hair-trigger alert. Given the risks, it is dangerous to keep missiles in this physical state, and to maintain plans for launching them on early indications of an attack. Questions abound about the susceptibility to hacking of tens of thousands of miles of underground cabling and the backup radio antennas used for launching Minuteman missiles. They (and their Russian counterparts) should be taken off alert. Better yet, we should eliminate silo-based missiles and quick-launch procedures on all sides.

But this is just a start. We need to conduct a comprehensive examination of the threat and develop a remediation plan. We need to better understand the unintended consequences of cyberwarfare — such as possibly weakening another nation’s safeguards against unauthorized launching. We need to improve control over our nuclear supply chain. And it is time to reach an agreement with our rivals on the red lines. The reddest line should put nuclear networks off limits to cyberintrusion. Despite its allure, cyberwarfare risks causing nuclear pandemonium.”
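Blair’s point that the missiles “are indifferent about the code’s source” describes, in security terms, a command channel with no message authentication. As a purely illustrative sketch (in no way a description of any real launch network), the Python below shows how a receiver can verify both the origin and the integrity of a command with a keyed MAC, so that a correctly formatted but unauthenticated message is rejected:

import hashlib
import hmac
import secrets

# Pre-shared secret key; a real system would keep this in dedicated
# hardware rather than application memory. (Everything here is a
# hypothetical illustration, not any actual protocol.)
SECRET_KEY = secrets.token_bytes(32)

def sign_command(command: bytes, key: bytes) -> bytes:
    # Append an HMAC-SHA256 tag computed over the command.
    tag = hmac.new(key, command, hashlib.sha256).digest()
    return command + tag

def verify_command(message: bytes, key: bytes):
    # Recompute the tag and compare in constant time; return the
    # command only if the tag proves it came from a key holder.
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return command if hmac.compare_digest(tag, expected) else None

authentic = sign_command(b"ENABLE-SEQUENCE-01", SECRET_KEY)
forged = b"ENABLE-SEQUENCE-01" + b"\x00" * 32  # right format, wrong source

print(verify_command(authentic, SECRET_KEY))  # b'ENABLE-SEQUENCE-01'
print(verify_command(forged, SECRET_KEY))     # None: rejected

The Navy fix described in the excerpt – never accepting a launch order that “came out of the blue” unless it could be verified through a second source – is a procedural analogue of the same idea: authenticate the source, not just the format, of a command.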

I have seen a few news articles indicating that nuclear power plant workers are now working remotely. While these articles did not elucidate the specific details of what the workers are doing from home, I am alarmed to hear that this is occurring in several regions due to COVID-19. I am wondering how secure these remote access systems are and whether they are vulnerable to cyber threats, such as hacking. Other regions are requesting that nuclear power plant staff stay on site for prolonged periods so as to prevent COVID-19 from entering the workforce – which could be detrimental in regions with limited and/or small numbers of trained staff.

The Nuclear Threat Initiative has published several articles on the risk of cyber threats to nuclear facilities, including:

Title: Cyber Threat to Nuclear Facilities
Author:
Publication(s): Nuclear Threat Initiative
Date: 3 September 2018
Link: https://ntiindex.org/news-items/cyber-threat-to-nuclear-facilities/

Title: Defenses Against The Cyber Threat Remain Insufficient
Author:
Publication(s): Nuclear Threat Initiative
Date: 2018
Link: https://ntiindex.org/data-results/key-trends/cyber-defenses/

While this article is not precisely about cyberattacks, it raises an alarming cyber-security issue – namely, that information is being censored during COVID-19 on many social media platforms within the Chinese context. This potentially translates to health, safety, and security concerns for populations offline. What are folks’ thoughts on this?

Title: Censored Contagion: How Information on the Coronavirus is Managed on Chinese Social Media
Author: Ruan, Lotus; Knockel, Jeffrey; and Crete-Nishihata, Masashi
Publication(s): The Citizen Lab (University of Toronto)
Date: 3 March 2020
Link: https://citizenlab.ca/2020/03/censored-contagion-how-information-on-the-coronavirus-is-managed-on-chinese-social-media/
Note(s): The article is quite long but interesting, and it includes relevant illustrations.

Article Excerpt(s):

From the Key Findings Section:

1) “YY, a live-streaming platform in China, began to censor keywords related to the coronavirus outbreak on December 31, 2019, a day after doctors (including the late Dr. Li Wenliang) tried to warn the public about the then unknown virus.

2) WeChat broadly censored coronavirus-related content (including critical and neutral information) and expanded the scope of censorship in February 2020. Censored content included criticism of government, rumours and speculative information on the epidemic, references to Dr. Li Wenliang, and neutral references to Chinese government efforts on handling the outbreak that had been reported on state media.

3) Many of the censorship rules are broad and effectively block messages that include names for the virus or sources for information about it. Such rules may restrict vital communication related to disease information and prevention.”

From the Article Itself:

(Regarding one of the methods of censorship):

“YY censors keywords client-side meaning that all of the rules to perform censorship are found inside of the application. YY has a built-in list of keywords that it uses to perform checks to determine if any of these keywords are present in a chat message before a message is sent. If a message contains a keyword from the list, then the message is not sent. The application downloads an updated keyword list each time it is run, which means the lists can change over time.

WeChat censors content server-side meaning that all the rules to perform censorship are on a remote server. When a message is sent from one WeChat user to another, it passes through a server managed by Tencent (WeChat’s parent company) that detects if the message includes blacklisted keywords before a message is sent to the recipient. Documenting censorship on a system with a server-side implementation requires devising a sample of keywords to test, running those keywords through the app, and recording the results. In previous work, we developed an automated system for testing content on WeChat to determine if it is censored.”

[…]

“On December 31, 2019, a day after Dr. Li Wenliang and seven others warned of the COVID-19 outbreak in WeChat groups, YY added 45 keywords to its blacklist, all of which made references to the then unknown virus that displayed symptoms similar to SARS (the deadly Severe Acute Respiratory Syndrome epidemic that started in southern China and spread globally in 2003).

Among the 45 censored keywords related to the COVID-19 outbreak, 40 are in simplified Chinese and five in traditional Chinese. These keywords include factual descriptions of the flu-like pneumonia disease, references to the name of the location considered as the source of the novel virus, local government agencies in Wuhan, and discussions of the similarity between the outbreak in Wuhan and SARS. Many of these keywords such as “沙士变异” (SARS variation) are very broad and effectively block general references to the virus.”

[…]

“Between January 1 and February 15, 2020, we found 516 keyword combinations directly related to COVID-19 that were censored in our scripted WeChat group chat. The scope of keyword censorship on WeChat expanded in February 2020. Between January 1 and 31, 2020, 132 keyword combinations were found censored in WeChat. Three hundred and eighty-four new keywords were identified in a two-week testing window between February 1 and 15.

Keyword combinations include text in simplified and traditional Chinese. We translated each keyword combination into English and, based on interpretations of the underlying context, grouped them into content categories.

Censored COVID-19-related keyword combinations cover a wide range of topics, including discussions of central leaders’ responses to the outbreak, critical and neutral references to government policies on handling the epidemic, responses to the outbreak in Hong Kong, Taiwan, and Macau, speculative and factual information on the disease, references to Dr. Li Wenliang, and collective action.”
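To make the two mechanisms the researchers describe concrete: the client-side scheme documented for YY is a local blocklist check that runs before a message ever leaves the device. Here is a minimal Python sketch of that logic, using placeholder keywords rather than entries from any real list:

# Hypothetical sketch of client-side keyword censorship as described
# for YY: the blocklist ships with (or is downloaded by) the client,
# and a message containing any listed keyword is never transmitted.
BLOCKLIST = {"blocked-term-a", "blocked-term-b"}  # placeholder entries

def can_send(message: str, blocklist: set) -> bool:
    # Suppress the message if it contains any blocked keyword.
    return not any(keyword in message for keyword in blocklist)

def send_message(message: str) -> None:
    if can_send(message, BLOCKLIST):
        print("sent:", message)          # stand-in for real transmission
    else:
        print("suppressed client-side")  # recipient never sees anything

send_message("hello everyone")               # sent
send_message("an update on blocked-term-a")  # suppressed

WeChat’s server-side variant is the same check relocated to Tencent’s servers, between sender and recipient. That relocation is why, as the researchers note, documenting it requires probing from outside: sending sampled keyword combinations through the app and recording which messages arrive.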

This article bridges cyber-security and pandemics. It is alarming to hear that COVID-19 disinformation campaigns are being undertaken.

Title: Six Reasons the Kremlin Spreads Disinformation About the Coronavirus [Analysis]
Author: Kalenský, Jakub
Publication(s): Digital Forensic Research Lab (Atlantic Council)
Date: 24 March 2020
Link: https://medium.com/dfrlab/commentary-six-reasons-the-kremlin-spreads-disinformation-about-the-coronavirus-8fee41444f60

Article Excerpt(s):

“A recent internal report published by the European Union’s diplomatic service revealed that pro-Kremlin media have mounted a “significant disinformation campaign” about the COVID-19 pandemic aimed at Europe. Previous statements by Western officials, including acting U.S. Assistant Secretary of State for Europe and Eurasia Philip Reeker, warning of the campaign suggested that its contours were already visible by the end of February 2020.

The Kremlin’s long-term strategic goal in the information sphere is enduring and stable: undermining Western unity while strengthening Kremlin influence. Pro-Kremlin information operations employ six complementary tactics to achieve that goal, and the ongoing disinformation campaign on COVID-19 is no exception.

1. Spread anti-US, anti-Western, and anti-NATO messages to weaken them

In late January, Russian media started spreading false accusations that COVID-19 was a biological weapon manufactured by the United States. The claim has appeared in other languages since then. This messaging is in line with decades of Soviet and Russian propaganda that has been fabricating stories about various diseases allegedly being a U.S. creation since at least 1949.

These messages aim to deepen anti-American, or more generally, anti-Western sentiment. Sometimes, the “perpetrator” is the entire NATO alliance, not just the United States, a variation that the DFRLab has traced in languages other than Russian as well. The impact on an average consumer of these messages will be approximately the same: anti-Western, anti-NATO and anti-U.S. feelings often go hand-in-hand in Europe.

2. Sow chaos and panic

In the aftermath of a tragedy or crisis, pro-Kremlin media outlets often try to incite fear, panic, chaos, and hysteria. On several occasions, in the aftermath of a terror attack in Europe or the United States, pro-Kremlin outlets spread accusations that the attack was a false flag operation conducted by various governments or secret services against its citizens, or that it was staged to impose greater control over the public.

These campaigns aim to stoke and exploit emotions, among which fear is one of the strongest. An audience shaken by fear will be more irrational and more prone to further disinformation operations.

3. Undermine the target audience’s trust in credible sources of information, be it traditional media or the government

Another messaging tactic tries to convince the target audience that the truth is different from whatever is being said by government institutions, local authorities or the media, thereby undermining trust in credible information sources. Convincing people to believe bogus sources of information first requires persuading them that real sources of accurate information cannot be trusted.

4. Undermine trust in objective facts by spreading multiple contradictory messages

According to a March 2020 review of COVID-19-related disinformation cases conducted by EUvsDisinfo, one popular pro-Kremlin narrative alleges, “[t]he virus is a powerful biological weapon, employed by the U.S., the Brits, or the opposition in Belarus.” A few days after the EUvsDisinfo report, pro-Kremlin outlets then accused Latvia of producing the virus. Spreading multiple and often contradictory versions of events undermines trust in objective facts.

The Kremlin has deployed this tactic liberally: after the MH17 tragedy, after the attack on a humanitarian convoy in Syria, and after the attempted murder of Sergei Skripal. The aim here is not to persuade people to believe one particular version of events, but to persuade the average consumer that there are so many versions of events that the truth can never be found. This tactic can be rather effective: then-U.S. presidential candidate Donald Trump previously said that “no one really knows who did it” [i.e. shot down MH17] despite available evidence and statements by US authorities.

5. Spread conspiracies to facilitate the acceptance of other conspiracies

People who believe one conspiracy theory are more likely to accept others. If your job is to spread lies, it helps to promote other conspiracies as well. The pro-Kremlin media has a history of spreading conspiracy theories and elevating conspiracy theorists. A global pandemic that naturally leads to rumor-mongering is an ideal opportunity to spread some additional unfounded beliefs.

6. Identify the channels spreading disinformation

In his book on disinformation, Romanian defector Ion Mihai Pacepa described “Operation Ares,” which used U.S. involvement in Vietnam to spread anti-American feelings both within the United States and abroad in an effort to isolate the United States on the international scene.

“All we had to do was to continue planting the seeds of Ares and water them day after day after day,” Pacepa wrote. “Eventually, American leftists would seize upon Ares and would start pursuing it of their own accord. In the end, our original involvement would be forgotten and Ares would take on a life of its own.”

When you spread disinformation, you not only try to influence the audience — you also gain valuable information from the audience. You identify the channels through which disinformation spreads and the intermediaries that help disinformation reach new audiences. You also see who counters your disinformation. Especially in a time of crisis, when rumors spread faster and travel further than normal, a well-organized disinformation campaign can lend valuable insight into how an adversary’s information environment is organized. This insight is extremely valuable for any future disinformation operations. Knowing who will help you spread the desired information, and whom to try to discredit ahead of time, makes new disinformation campaigns easier to mount and sustain.”

I saw an interesting video by Vice News about the vulnerability of water and wastewater (sewage) treatment plants. Apparently many of the systems are being digitized and monitored remotely. As such, they become increasingly vulnerable to cyberattacks. The video focused on some research in Israel around protecting these vital infrastructure locations and demonstrated how easy it is to hack the system. Alarming news to watch. What other infrastructure is vulnerable to cyber security threats?

Getting ahead of the Christchurch Call

By Alistair Knott, Newsroom, Oct 20, 2019
https://www.newsroom.co.nz/2019/10/10/850847/getting-ahead-of-the-christchurch-call

Instead of using what amounts to censorship, tech companies signed up to the Christchurch Call would be wise to adopt a more preventative tactic, writes the University of Otago’s Alistair Knott:

We have heard a lot recently from the world’s tech giants about what they are doing to implement the pledge they signed up to in the Christchurch Call. But one recent announcement may signal a particularly interesting development. As reported in the New Zealand Herald, the world’s social media giants ‘agreed to join forces to research how their business models can lead to radicalisation’. This marks an interesting change from a reactive approach to online extremism, to a preventative approach.

Until now, the tech companies’ focus has been on improving their methods for identifying video footage of terrorist attacks when it is uploaded, or as soon as possible afterwards. To this end, Facebook has improved its AI algorithm for automatically classifying video content, to make it better at recognising (and then blocking, or removing) footage of live shooting events. The algorithm in question is a classifier, which learns through a training process. In this case, the ‘training items’ are videos, showing a mixture of real shootings and other miscellaneous events.
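As a toy illustration of the supervised setup described above – training items paired with labels, a model fitted to them, then applied to new uploads – here is a minimal sketch using synthetic stand-in features (real systems classify raw video with deep networks, not random vectors):

# Toy stand-in for the content classifier described above. Each training
# item is a feature vector "extracted from a video" (here just synthetic
# numbers) with a label: 1 = footage of a real shooting event, 0 = other.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))        # synthetic video features
y_train = (X_train[:, 0] > 0).astype(int)   # synthetic labels

clf = LogisticRegression().fit(X_train, y_train)

# The reactive pathway: classify a new upload, then block or remove it.
new_upload = rng.normal(size=(1, 16))
print("block/remove" if clf.predict(new_upload)[0] == 1 else "allow")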

The Christchurch Call basically commits tech companies to implementing some form of Internet censorship. The methods adopted so far have been quite heavy-handed: they either involve preventing content being uploaded, or removing content already online, or blocking content in user search queries. Such moves are always closely scrutinised by digital freedom advocates. Companies looking for ways to adhere to the Christchurch pledge are strongly incentivised to find methods that avoid heavy-handed censorship.

In this connection, it is interesting to consider another classifier used by Facebook and other social media companies, which sits at the very centre of their operation. This is a classifier that decides what items users see in their feed. This classifier is called a recommender system. It is trained to predict which items users are most likely to click on.

There is some evidence that recommender systems have a destabilising effect on currents of public opinion. This is because the training data for a recommender system is its users’ current clicking preferences. The problem is that recommender systems also influence these preferences, because the items they predict to be most clickable are also prioritised in users’ feeds. Their predictions are in this sense a self-fulfilling prophecy, amplifying and exaggerating any preferences detected in users.
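The self-fulfilling prophecy here is a feedback loop: exposure is allocated according to predicted clickability, clicks accrue to whatever gets exposed, and the model is retrained on those clicks. A small simulation sketch (all numbers invented) shows how a slight baseline preference can compound into near-total dominance of the feed:

# Minimal simulation of the recommender feedback loop described above.
# Users have a slight baseline preference for item B (the more
# "controversial" item). Exposure goes to whichever item the model
# currently estimates as more clickable, and estimates are updated
# from the clicks that this exposure generates.
import random

random.seed(1)
base_pref = {"A": 0.48, "B": 0.52}   # true click probabilities
estimate = {"A": 0.5, "B": 0.5}      # model's estimated clickability
clicks = {"A": 1, "B": 1}
shows = {"A": 2, "B": 2}

for _ in range(10_000):
    item = max(estimate, key=estimate.get)  # greedily show the "best" item
    shows[item] += 1
    if random.random() < base_pref[item]:
        clicks[item] += 1
    estimate[item] = clicks[item] / shows[item]  # retrain on own output

print(shows)  # exposure ends up overwhelmingly on B

The structure, not the invented numbers, is the point: the training signal is generated under the very policy being trained, so a small preference is amplified rather than merely measured.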

This effect may cause recommender systems to polarise public opinion, by leading users to extremist positions. As is well-known, people have a small tendency to prefer items that are controversial, scandalous or outrageous – not because they are extremists, but just because it’s human nature to be curious about such things. This small tendency can be amplified by recommender systems. Obviously, social media systems aren’t responsible by themselves for extremism. But there’s evidence they push in this direction. A recent study from Brazil is particularly convincing, showing that Brazilian YouTube users consistently migrate from milder to more extreme political content, and that the recommender algorithm supports this migration.

Tech companies certainly don’t design their recommender systems to encourage extremism. The systems are simply designed to maximise the amount of time users spend viewing content from their own site – and thus to maximise profits from their advertisers. A tech company’s recommender system is a core part of its business model. This is why it’s so interesting to hear reports, for the first time, that social media companies are beginning to question whether their ‘business model’ can lead to extremism.

It’s conceivable that very small changes in recommender algorithms could counteract their subtle effects in tilting public opinion towards extremism. Any such changes would still be a form of ‘Internet censorship’. But they are a very light touch. There is no question of deleting material from the Internet, or preventing uploads, or blocking users’ search requests. In fact, there is no denial of user requests at all, since recommender systems already deliver content unbidden into users’ social media feeds. Recommender systems are already making choices on behalf of users. But at present, these choices are driven purely by tech companies’ drive to maximise profits. What’s being contemplated are subtle changes to these systems that take into account the public good, alongside profits.

As well as being less heavy-handed in censorship terms, these changes also have a preventative flavour, rather than a reactive one. Rather than waiting for terrorist incidents and then responding, the proposed changes act pre-emptively, to diffuse the currents that lead to extremism. They are very appealing from this perspective too.

(Photo: New Zealand’s Jacinda Ardern and French President Emmanuel Macron)

The question of how recommender algorithms could be modified to defuse extremism is an important one for debate, both within tech companies and in the public at large. The tech companies are best placed to run experiments with different versions of the recommender system and observe their effects. (They routinely do this already.) The public should have a role in discussing what sorts of extremism should be counteracted. (There’s presumably no harm in being an extreme Star Wars fan.) The crucial thing is to begin a discussion between the tech companies and the public they claim to serve. We hope we are seeing the beginnings of this discussion in the recent announcement.


It seems to me that if you want a job and you are trained well enough, our Canadian Security Establishment should be a good launch point.

What role would geomagnetic and solar storms play in cyber-security? In 1859, a large solar storm hit Earth, causing the electronics of the day (such as telegraphs) to go haywire. In more recent times (the Cold War era, etc.), atmospheric conditions and solar flares have almost sparked nuclear exchanges. Are current cyber systems shielded adequately from these phenomena? Are operators able to distinguish these phenomena from hostile attacks?

I think perhaps one of the earliest examples of cyber-warfare was the interception of the Zimmermann telegram in 1917, sent from Germany to Mexico. Are there other examples of pre-internet “cyber” (electric, digital, etc.) warfare that should be considered within these contexts?

An interesting article from Will Dunn at the New Statesman (2018). Hybrid warfare – a mix of cyber and physical warfare – is an interesting concept that I had not encountered previously.

“Misinformation poses the most serious risk, says Futter, to “those ICBMs in the US and Russia that only need a few minutes to go.” Simple interference in communications – Unal points to satellites as a potential weak point – could be enough to stop the most important military decisions being made with a cool head. “Keeping weapons on high alert in a cyber environment,” says Futter, “is an enormous risk.”

Beyza Unal recalls the story – related memorably in David E. Hoffman’s Pulitzer-winning investigation of automatic nuclear systems, Dead Hand – of one of the most cool-headed decisions of the Cold War. The Russian lieutenant-colonel Stanislav Petrov was in charge of the Serpukhov-15 early warning station on the night in September 1983 when the Soviet Union’s satellites, sending data to the country’s most powerful supercomputer, registered a nuclear attack by the US. Despite being warned that five ICBMs were on their way to the USSR, Petrov told the decision-makers above him that the signals were a false alarm. “And he was right,” says Unal. “But a cyberattack could look like that, a spoofing of the system. Some say that humans are the weakest link in cyber issues. I say humans are both the weakest link and the strongest link. It depends on how you train them.””

and

“In the spring of 2013, a Ukrainian army officer called Yaroslav Sherstuk developed an app to speed up the targeting process of the Ukrainian army’s Soviet-era artillery weapons, using an Android phone. The app reduced the time to fire a howitzer from a few minutes to 15 seconds. Distributed on Ukrainian military forums, the app was installed by over 9,000 military personnel. By late 2014, however, a new version of the app began circulating. The alternate version contained malware known as X-Agent, a remote access toolkit known to be used by Russian military intelligence. The cyber security firm Crowdstrike, which discovered the malware, said that X-Agent gave its users “access to contacts, SMS, call logs and internet data,” as well as “gross locational data”. In the critical battles in Donetsk and Debaltseve in early 2015, the app could have shown Russian forces where Ukraine’s artillery pieces were, who the soldiers operating them were talking to, and some of what they were saying. It may be, then, that Russia’s concern – Futter describes it as “panic” – about the risks of hybrid warfare is based on the knowledge that it has been used in battle, and it works.”

With security being an important component of everyone’s organization, one should take some time to learn about AI and how it can be used to protect one’s corporate assets proactively.
https://www.ibm.com/security/security-intelligence/qradar

From Paul Meyer:
This is the submission by ICT4Peace, written by Paul Meyer for the UN Open-Ended Working Group on Cyber Security, which will begin its work in September. (The UN Office of Disarmament Affairs has now posted it to the official site for the OEWG: https://www.un.org/disarmament/open-ended-working-group/.)
This is the submission itself:
https://unoda-web.s3.amazonaws.com/wp-content/uploads/2019/08/ICT4PeaceBrief-OEWG-Aug42019.pdf

I had only recently learned of the CSA (Canadian Security Agency), as my education in information security demanded it. I did a search on it and realized the agency’s name might have been miscommunicated or misinterpreted by me… it was actually the CSE (Communications Security Establishment), whose website I found.

It has a very interesting site (https://www.cse-cst.gc.ca/en/careers-carrieres), which I briefly looked over. The gist of it all is that I am happy to know we have such an agency to watch over our national boundaries and protect us from cyber threats from Russia and China, and even from some of our friendly neighbours, whoever they may be.

So many conflicting technical standards produce wide, gaping holes in our technical information communication infrastructures, not to mention software bugs and malicious virus activity. The average computer user is in a difficult position and has to make use of available protection software to keep safe. That requires an awareness of what products are available and learning how they are used. Products like Avast, AVG and McAfee now offer not just antivirus but tool suites to cope with potential computer intrusions. New tools are rolled out quickly, and I find myself searching for high-security browsers – like Epic, Brave and the like – that don’t track my information. Connecting through VPNs seems to be encouraged, but all these things, if free, usually cost the price of sales pitches and repeated upgrade offers. Choose your tools wisely and guard your IT footprint.