3rd Annual JHU Senior Executive Cyber Security Conference.
Don Good, FBI veteran and current Director, Global Legal Technology Solutions, at Navigant, gives a view from the trenches.
Sep 21, 2016

"Navigating today's cyber security terrain."

The conference offered leaders responsible for an enterprise's cyber security useful insights into navigating the current landscape of threats and defensive measures. Executives and entrepreneurs concerned about the realities of what they face in cyberspace received a lot of good advice. Much of it concerned error, and how to avoid it. Some of the advice was encouraging and some of it was dismaying; some of it was expected, but much was surprising.

The conference included breakout sessions on compliance and security issues for the healthcare and financial services sectors, as well as a discussion of the NIST Framework. Our account below describes the general sessions.

Why "Iloveyou88" isn't a particularly strong password.

Some surprising advice was delivered in the morning keynote by Lorrie Cranor, Chief Technologist at the Federal Trade Commission. Her topic was "usable security and public policy," and she was particularly interested in describing the ways in which the assumptions people make about security are plausible and well-intentioned, but wayward.

She opened with an example from FTC headquarters. The Commission has installed electronic locks on its office doors, which seems quite up-to-date and doubtless more secure than the old-fashioned key-and-tumbler kind. The keys are wireless, and a door opens when the emoticon on the key changes from a scowl to a smile. But it has proven difficult to please the emoticon, and the result, of course, is that people have generally stopped locking their office doors. And the key itself, by the way, isn't secure.

So even the Federal agency that's been most aggressive in asserting its regulatory equities over cyber security has some significant blind spots in technology, policy, and advice to the public. The FTC isn't alone in this. Improvement depends upon arriving at a sound empirical understanding of which security measures work and which are futile (or in some cases even harmful).

Common wisdom and common advice hold that you should use "strong" passwords and change them often. The theory behind this conventional wisdom is plausible enough: changing passwords should lock out attackers who've obtained compromised credentials. But Cranor pointed out that following this advice is often unhelpful in practice. Most people, when they change passwords for security, rely on transformations of older passwords (people do, after all, have to remember them). But a study at the University of North Carolina at Chapel Hill found that those transformation rules can readily be applied to crack the new passwords: in tests, the researchers cracked 17% of accounts in fewer than five guesses. User annoyance also rises with the frequency of password changes, and annoyance breeds weak passwords.
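
To make the point concrete, here is a minimal sketch, in Python, of how an attacker who holds one compromised password can derive likely guesses for its "changed" successor. The example password and the handful of rules are hypothetical, not taken from Cranor's talk or the UNC study; real cracking tools apply thousands of such rules.

```python
import hashlib

def transforms(old_password):
    """Yield a few guesses derived from a user's previous password,
    in the spirit of the transform rules the UNC researchers studied."""
    yield old_password                               # straight reuse
    if old_password and old_password[-1].isdigit():  # increment a trailing digit
        yield old_password[:-1] + str((int(old_password[-1]) + 1) % 10)
    yield old_password + "!"                         # append a symbol
    yield old_password.capitalize()                  # toggle capitalization
    yield old_password + "1"                         # append a digit

def crack(new_hash, old_password):
    """Try transform-based guesses against the hash of the new password."""
    for attempt, guess in enumerate(transforms(old_password), start=1):
        if hashlib.sha256(guess.encode()).hexdigest() == new_hash:
            return guess, attempt                    # cracked, and in how many guesses
    return None, None

# Hypothetical example: the "new" password is a small edit of the old one.
new_hash = hashlib.sha256(b"Tarheels#2").hexdigest()
print(crack(new_hash, "Tarheels#1"))                 # ('Tarheels#2', 2)
```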

There's growing recognition that mandating frequent password changes isn't a particularly good idea. The UK's Communications-Electronics Security Group (CESG) has come around to this view, and Microsoft now advises using strong, unique passwords while remaining silent on frequent changes.

Password strength is also easy to misread, and most people guess the relative security of passwords incorrectly. Cranor outlined some common misconceptions. Keyboard patterns aren't secure (even if "they're diagonal"). In fact, keyboard patterns are easily guessed, and inculcate a false sense of randomness and strength. Nor is adding an exclamation point particularly helpful. And adding numerals to an alphabetical password isn't particularly helpful either. If "Password" remains one of the commonest (and sorriest) passwords, "Password1," "Password2," and so on represent no improvement. And the same of course holds true for other sad specimens like "Letmein," "Iloveyou," and even the ever-popular "Ninja."
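
The arithmetic behind that advice is easy to demonstrate. The toy guess generator below (the wordlist and mangling rules are assumptions for illustration, not anything presented at the conference) shows that appending digits or an exclamation point to a common word leaves a password within the first thousand or so candidates a cracker would try.

```python
from itertools import product

# A toy cracker's wordlist and mangling rules. Real wordlists run to
# millions of entries; these rules are a tiny, hypothetical sample.
common_words = ["password", "letmein", "iloveyou", "ninja", "monkey"]
capitalizations = [str.lower, str.capitalize]
suffixes = [""] + [str(n) for n in range(100)] + ["!", "1!", "123"]

guesses = [cap(word) + suffix
           for word, cap, suffix in product(common_words, capitalizations, suffixes)]

for target in ["Password1", "Iloveyou88", "Letmein!"]:
    position = guesses.index(target) + 1
    # e.g. 'Iloveyou88' falls at guess 610 of the 1,040 candidates these rules produce
    print(f"{target!r} falls at guess {position} of {len(guesses):,}")
```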

In Cranor's opinion, for whatever risks they may carry, password managers are far better than the likely alternatives, which are weak, reused passwords.

She closed with an account of the consequences of becoming a victim of identity fraud (which she herself experienced). It's difficult to recover, and hard to track down useful information to help you with that recovery. She recommends the site identitytheft.gov, and she ended with an invitation to attend upcoming FTC events touching on cyber security. (Information may be found at https://www.ftc.gov.)

Risk awareness: views from the security industry.

Don Good (Director, Global Legal Technology Solutions, Navigant) and Bob Olsen (CEO of COMPASS Cyber Security) offered perspective on the threat and what to do about it.

A twenty-year FBI veteran, Good described the familiar threat taxonomy and how the threat actors it captures manifest themselves "on the front line." The threats, distinguished by motivation and severity of consequence, range from hacktivists, criminals, insiders, and nation-states to cyber terrorism and cyber war. Good thinks criminals are currently the most to be feared, and he noted in particular the sophisticated reconnaissance criminals now undertake.

He strongly recommended that businesses engage with law enforcement early in an attack, and indeed that they get to know law enforcement even before becoming a victim. Various information-sharing mechanisms are valuable; he particularly recommended InfraGard, the National Cyber-Forensics and Training Alliance (NCFTA), Information Sharing and Analysis Centers (ISACs), and Information Sharing and Analysis Organizations (ISAOs).

COMPASS Cyber Security's Olsen described the current threat's increased sophistication and the wide net it casts for its victims. Businesses also face legal and reputational risk from those who are injured by successful cyber attacks. Data owners are increasingly unhappy with exposure, and employees are restive when their personal information is compromised in a corporate breach. On that latter point he noted the ongoing class action suit brought against Seagate by its employees.

Olsen reviewed the currently common criminal techniques: phishing, vishing (phishing by voice, in a phone call), business email compromise, ransomware, and malicious insider activity. Phishing and vishing are now commonly being used together against higher-value targets—organized criminals hire call centers to support them. Server attacks are down, but endpoint attacks are up, as is social engineering. And, disturbingly, detection by victims is down even as detection rates by law enforcement are up.

Legal risk isn't confined to civil suits by injured parties. Regulatory bodies, especially the Department of Health and Human Services' Office for Civil Rights, the Federal Trade Commission, and the Securities and Exchange Commission, as well as such industry bodies as the Financial Industry Regulatory Authority (FINRA), are becoming increasingly aggressive in their enforcement actions. The Office for Civil Rights in particular has announced its intention to focus on business associates and small enterprises. It's clear, Olsen said, that privacy issues and regulations increasingly dominate the risks businesses need to manage.

We were able to follow up with Olsen after his presentation. We asked whether integrating security products is often more challenging than buyers expect, and Olsen confirmed that it is. He often sees an "unhealthy trust" arise in enterprises that buy security products, when in reality integration usually proves far harder than they anticipate. One finds a great deal of duplication (say, two or three anti-virus products), and this often results in what Olsen called "Frankenstein networks." Vendors could help their customers by devoting more effort to integration and implementation.

Olsen believes that quantifying cyber risk remains an unmet challenge. He would like to see greater recognition that cyber risk belongs in a traditional, rigorous risk management framework. The insurance industry may be moving in the right direction here. Choosing the right product is still a challenge if you're buying cyber insurance, but the market is maturing, and Olsen has seen insurers become far more selective in the risks they're willing to underwrite.

Michael Colford (Assistant Vice President, Cyber Liability at Aspen Insurance) in his presentation suggested some of the things insurers are looking for in their clients. He strongly advised planning for a breach. Having a plan and exercising it in advance will not only hold down panic, but will also mitigate an incident's downstream impact. Particular attention should be paid to determining who should be involved in breach response, from boards to individual employees and third parties. Their responsibilities should be clear, and clearly communicated.

Cloud security: how to approach the cloud.

A panel on cloud security offered advice to organizations using cloud services. Michele Cohen (Miles and Stockbridge), John Holmblad (University of Maryland University College/George Washington University) and Jason Ward (Breakthru Beverage Corporation) shared their insights and experiences.

Ward's advice was particularly interesting in that his company's business involves holding little personally identifiable information and not much intellectual property, either—their challenges are storing data and keeping it accessible. Breakthru entered the cloud under a software-as-a-service model appropriate to its dispersed, highly mobile workforce. They've concluded that it's important to issue a request for proposals (RFP) and solicit bids from as many vendors as possible. The use of an RFP enables them to compare "apples to apples," which isn't always easy to do without the specifications an RFP lays down. It's important to vet providers very carefully—How robust is their system? What are their policies? How granular can they get?—and an RFP can be structured to support such vetting.

Cohen described her role as counselor: to work with the client to allocate risk responsibility in contracts. "Today, the cloud is pretty much a given." Customers expect mobile functionality, and the central issue is control over data. From a comfort, marketing, and PR perspective, Cohen thought there's some satisfaction in working with a leading provider of cloud services. It's also worth being suspicious of a small provider who promises everything; you know they can't deliver. She asks her clients what they expect the service to do for them, and what data that service will handle. Finally, "since all relationships end," she wants to help her clients shape the end of their relationship with a cloud provider. Will they get their data back? What kind of cooperation can they expect in audit and discovery?

Since his experience goes back to ARPANET, Holmblad reminded everyone that ARPANET was in many respects the original cloud, and that its goal was information sharing among scientists. Our current challenge, he said, is much larger: how to run an economy securely in the cloud. When you're considering whether to migrate to the cloud, ask yourself whether you can do a better job of securing your information than the service provider can. "You'll find the providers' protections are world-class. How do yours stack up against them?" The point isn't that you can relax your security in the cloud, but rather that the cloud provider offers additional security you can take advantage of.

Cyber threats: today's situation, tomorrow's threats, and the nature of human error.

To understand the situation today, it's useful to walk through recent examples of breaches, and Michael Misumi did that for the audience. (Misumi is Chief Information Officer and Head of the Information Technology Service Department at the Applied Physics Laboratory (APL) at the Johns Hopkins University.) He described two breaches and their remediation. The APL's assessment found that social engineering and client-based attacks presented the highest risk. The lab also had to confront the familiar difficulty of cleaning a complex, mission-critical network, which is far harder than one would like to assume.

For a look at the future, the conference turned to Avi Rubin, Professor of Computer Science and Technical Director of the Information Security Institute at the Johns Hopkins University. He began, as promised, with a look back at the history of cyber security. One important lesson he drew from that history was the difficulty of getting decision-makers to take seriously potential threats before those threats are manifested in the wild.

From this he pointed out some emerging kinds of threats we aren't generally aware of. Did you know, for example, that repeatedly reading data in DRAM can flip bits in adjacent rows and so alter nearby records (the Rowhammer effect)? Or that ransomware, now familiar to everyone, has a big future beyond simply encrypting files? Why not manipulate data instead? Suppose one were able to establish persistence in a hospital's network and systematically alter patient medical records for a few months. Then one could approach the hospital, point out that its data were provably corrupt, and offer to restore the integrity of that data, for a fee. And what the criminals asked for needn't be called "ransom." Why not call it a "subscription"?

There is, of course, an irreducible element of human error in security. Human factors expert Julie Marble of the Applied Physics Laboratory (APL) at the Johns Hopkins University defined human error as a failure to perform a prescribed act within specified norms, where such failure could have a negative effect. We often misunderstand, Marble pointed out, the context and processes within which people make security errors, and any effort to reduce human error must take both into account. She closed with an appropriate bit of big-picture advice that should temper our expectations of risk reduction, as important as such reduction is: it's probably impossible to engineer the human out of a system whose purpose is to allow human beings to share information.