source: darkreading.com

IoT devices are rapidly populating enterprise networks, but 82% of IT and line-of-business professionals struggle to identify all the network-connected devices within their enterprise.

According to a new Forrester study of 603 IT and business decision-makers worldwide at organizations with 2,500 or more employees, a key contributor to the IoT visibility problem may be confusion over who is responsible for IoT management and security.

While 50% of survey respondents, a group that includes line-of-business (LoB) and IT security operations center (SOC) professionals, say the SOC is responsible for the default configuration and management of these devices, confusion arises when it's actually time to configure them, according to the survey, which was commissioned by ForeScout Technologies.

LoB personnel, who are responsible for the operational technology (OT) that runs specific lines of business, often find that technology lumped into the broad category of connected devices, or IoT.

But when the survey drilled down further into which job titles should be responsible for IoT default configurations, 54% of LoB respondents feel the task should fall to device manufacturers or LoB staff, and 45% of IT respondents agree.

As a result, according to the report, LoB users are deploying devices under the assumption that all proper controls are in place, without touching base with the SOC. And without SOC professionals involved in the initial setup of IoT devices, it's difficult to get a clear view of which devices are actually on the network.

"There is a lot of confusion and lack of clarity of who should own the security of IoT devices and determine what should happen," says Pedro Abreu, chief strategy officer for ForeScout. "LoBs, like plant managers, have a lot of devices that connect to the network. But they tend to think of health and safety first and not security."

 source: cnet.com

Hackers who get hold of some OnePlus phones can obtain virtually unlimited access to files and software through use of a testing tool called EngineerMode that the company evidently left on the devices.

Robert Baptiste, a freelance security researcher who goes by the name Elliot Alderson on Twitter after the "Mr. Robot" TV show character, found the tool on a OnePlus phone and tweeted his findings Monday. Researchers at security firm SecureNow helped figure out the tool's password, a step that means hackers can get unrestricted privileges on the phone as long as they have the device in their possession.

The EngineerMode software functions as a backdoor, granting access to someone other than an authorized user. Escalating those privileges to full do-anything "root" access required a few lines of code, Baptiste said.

"It's quite severe," Baptiste said via a Twitter direct message.

OnePlus disagreed, though it said it has decided to modify EngineerMode.

"EngineerMode is a diagnostic tool mainly used for factory production line functionality testing and after sales support," the company said in a statement. Root access "is only accessible if USB debugging, which is off by default, is turned on, and any sort of root access would still require physical access to your device. While we don't see this as a major security issue, we understand that users may still have concerns and therefore we will remove the adb [Android Debug Bridge command-line tool] root function from EngineerMode in an upcoming OTA."

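To see whether a handset still ships a diagnostic app like this, the installed package list can be inspected over adb. The sketch below is illustrative only: it assumes adb is installed and USB debugging is enabled, and the package-name fragments it searches for are assumptions for illustration, not names confirmed by OnePlus.

    import subprocess

    # Name fragments to look for; assumptions for illustration only.
    SUSPECT_FRAGMENTS = ["engineermode", "engineeringmode"]

    def installed_packages():
        """Return package names reported by the connected device over adb."""
        out = subprocess.run(
            ["adb", "shell", "pm", "list", "packages"],
            capture_output=True, text=True, check=True,
        ).stdout
        # Each line looks like "package:com.example.app"
        return [line.split(":", 1)[1].strip() for line in out.splitlines() if ":" in line]

    if __name__ == "__main__":
        hits = [p for p in installed_packages()
                if any(frag in p.lower() for frag in SUSPECT_FRAGMENTS)]
        if hits:
            print("Diagnostic-style packages found:")
            for p in hits:
                print("  " + p)
        else:
            print("No EngineerMode-style package reported by the device.")
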
 source: securityweek.com

As security professionals, we’ve faced no shortage of challenges since the start of 2017 -- from the abundance of large-scale data breaches, ransomware attacks, and business email compromise schemes, to risks posed by Internet of Things (IoT) devices, supply chain vulnerabilities, and insider threats. These challenges have ultimately helped create numerous noteworthy shifts not just in how we approach security, but also in how we obtain, apply, and further integrate intelligence.

Here are the top three trends that defined the evolution of intelligence in 2017:

Increased engagement in intelligence sharing

Most of us can agree that when executed correctly, intelligence sharing can be highly beneficial -- yet historically, the extent to which many organizations have shared intelligence has been limited or non-existent. While rightful concerns over trust and privacy will likely always hinder participation, intelligence sharing gained substantial traction as a “best practice” in 2017. The emergence of various new intelligence sharing groups has contributed to this trend, as has the substantial number of threats and resulting incidents for which external collaboration was integral to mitigation and forensics efforts.

The collaborative takedown of the WireX botnet this past August is a great example. Following the news that researchers from Akamai, Cloudflare, Flashpoint, RiskIQ, and others teamed up to neutralize a massive DDoS botnet, they were widely recognized not just for tackling WireX, but also because their joint effort epitomized the immense benefits to be gleaned from effective, trusted collaboration and intelligence sharing. 

Balancing automation with human-powered analysis

The introduction of automation has led to sweeping changes throughout the industry over the last few years. Among these changes is the emergence of the term “automated intelligence.” Typically comprising data collected by automated tools from various online sources, automated intelligence isn’t really intelligence at all -- a fact that has become even more clear in 2017. 

While traditional uses for certain types of intelligence have long consisted of technical indicators of compromise (IoCs) -- most of which are gleaned from automation -- more organizations are recognizing that IoCs and other automated data are rarely actionable until contextualized and further enhanced by humans.
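
To make that distinction concrete, here is a minimal sketch of the gap described above: a bare, machine-collected indicator versus the same indicator after an analyst adds the context that makes it actionable. The field names and values are illustrative assumptions, not a real feed schema or vendor API.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RawIndicator:
        """What an automated feed typically yields: a bare observable."""
        ioc_type: str       # e.g. "ipv4", "domain", "sha256"
        value: str
        first_seen: str

    @dataclass
    class EnrichedIndicator(RawIndicator):
        """The same observable after analyst review: now it can drive a decision."""
        campaign: Optional[str] = None        # attributed activity, if known
        confidence: str = "unknown"           # analyst judgement: low / medium / high
        relevance: str = "unknown"            # does it touch our assets or sector?
        recommended_action: str = "monitor"   # block, hunt, monitor, or ignore

    raw = RawIndicator(ioc_type="ipv4", value="203.0.113.45", first_seen="2017-11-01")
    enriched = EnrichedIndicator(
        **raw.__dict__,
        campaign="commodity ransomware distribution",
        confidence="medium",
        relevance="targets exposed RDP, which two of our hosts expose",
        recommended_action="block at the perimeter and hunt for prior connections",
    )
    print(enriched)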

source: sent by Rob Wiltbank, Galois

Our friends at GALOIS (thanks to Artemus FAN, Rob Wiltbank!) have been gracious in allowing us to post their take on the recent "KRACK" (WPA2) vulnerability.  We reported the vulnerability in our last issue of "Artemus Spotlights".  Take a look at that posting here.

On Monday, October 16, the KRACK vulnerability in WPA2 was revealed in a paper by Mathy Vanhoef and Frank Piessens. KRACK enables a range of attacks against the protocol, resulting in a total loss of the privacy that the protocol attempts to guarantee. For more technical details on the attack, the website and the Key Reinstallation Attacks (KRA) paper are the best places to look. The paper presents the problem clearly, and you will learn about a protocol that you use constantly. Furthermore, it presents a number of compelling attacks that show exactly how big a problem KRACK is. This post will discuss what the KRACK paper has to teach us about formal methods and cryptography standards.
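
At the heart of the attack is a simple cryptographic fact: if a key reinstallation resets the packet nonce, two frames end up protected by the same keystream, and an eavesdropper can cancel that keystream out entirely. The toy sketch below illustrates only the principle; the keystream function stands in for CCMP's counter mode and is not the real construction.

    import hashlib

    def keystream(key, nonce, length):
        """Toy keystream derived from (key, nonce); a stand-in for CCMP counter mode."""
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(
                key + nonce.to_bytes(8, "big") + counter.to_bytes(4, "big")
            ).digest()
            counter += 1
        return out[:length]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    key = b"session key from the 4-way handshake"   # illustrative value
    p1 = b"GET /inbox HTTP/1.1"
    p2 = b"secret session data"

    # Normal operation: every frame uses a fresh nonce.
    c1 = xor(p1, keystream(key, 1, len(p1)))

    # After a key reinstallation the nonce counter is reset, so the next frame
    # reuses nonce 1 with the same key -- the same keystream.
    c2 = xor(p2, keystream(key, 1, len(p2)))

    # Capturing both ciphertexts reveals p1 XOR p2 without touching the key;
    # known plaintext in one frame then exposes the other.
    assert xor(c1, c2) == xor(p1, p2)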

It’s a little surprising that a protocol as widely used as WPA2 still harbors critical vulnerabilities. Even more surprising is that portions of the protocol have been formally verified (mathematically proved) to be secure! Why don’t these factors guarantee that the protocol is free of such critical vulnerabilities? The KRA paper raises the following concerns about standards and formal verification. These provide us with valuable insight into pitfalls to avoid as we perform and present our work:

  1. Specifications for a protocol may not be precise enough to guarantee security.
  2. Real-world implementations may not match formal specifications used in proofs.
  3. Formal proofs might lead to complacency, discouraging future audits and inspections.

At Galois, we believe strongly in the value of formal verification, so we think it’s worth examining each of these points. In doing so, we gain some insights into real-world cryptography verification.

Concern: Specifications may not be precise enough

It is impossible to test for security. Security is a property of all possible behaviors of a system, so the time to get security right is when the system is defined. KRACK is a vulnerability in the specification of the WPA2 protocol, and it is exacerbated in some cases by decisions that implementors made in the face of an ambiguous specification. Each of those decisions allows the implementation to function correctly; after all, we’ve been successfully utilizing WPA2 for a long time without noticing any significant functionality shortcomings. In the face of the KRACK vulnerability, however, these ambiguities allow for significantly more damaging attacks.
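
As a toy illustration of that kind of ambiguity (not the actual 802.11 state machine): the written spec says, roughly, "install the session key after receiving message 3 of the handshake," but it does not say what to do if message 3 arrives again because an acknowledgement was lost. Both readings below complete the handshake correctly; only one resets the transmit nonce, which is exactly what KRACK exploits. The class and method names are assumptions for illustration.

    class SupplicantA:
        """Reads the spec as: (re)install the key every time message 3 arrives."""
        def __init__(self):
            self.key = None
            self.nonce = 0

        def on_message3(self, session_key):
            self.key = session_key
            self.nonce = 0        # nonce counter reset on every (re)installation

    class SupplicantB:
        """Reads the spec as: install the key only on the first message 3."""
        def __init__(self):
            self.key = None
            self.nonce = 0

        def on_message3(self, session_key):
            if self.key is None:
                self.key = session_key
                self.nonce = 0    # reset only once, on first installation

    # An attacker who blocks the acknowledgement and replays message 3 forces
    # SupplicantA back to nonce 0 mid-session; SupplicantB keeps counting.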

We can ensure that specifications are unambiguous by making them more formal. Often, ambiguity hides in natural language specifications in ways that are difficult to understand until the specifications are represented formally. A formal specification serves as an intermediate point between the easily ingestible natural language specifications we typically see today and the more complicated implementations.
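
As a small example of what "more formal" can mean in practice: instead of the English clause "install the key after receiving message 3," state a property that can be checked against executions, such as "the same (key, nonce) pair is never used to protect two different frames." The sketch below expresses that property as a plain trace check; it is an illustrative assumption about how one might phrase it, not the specification language Galois or the KRA authors actually used.

    # A trace is a sequence of (session_key, nonce) pairs, one per protected frame.

    def nonces_are_fresh(trace):
        """Property: no (key, nonce) pair is ever reused across the trace."""
        seen = set()
        for key, nonce in trace:
            if (key, nonce) in seen:
                return False
            seen.add((key, nonce))
        return True

    # An implementation that reinstalls the key on a replayed message 3 (and so
    # resets its nonce counter) produces a trace that violates the property:
    bad_trace  = [(b"ptk", 1), (b"ptk", 2), (b"ptk", 1)]   # nonce 1 reused after reset
    good_trace = [(b"ptk", 1), (b"ptk", 2), (b"ptk", 3)]

    assert not nonces_are_fresh(bad_trace)
    assert nonces_are_fresh(good_trace)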