source: Carmen Middleton, thecipherbrief.com

There has been growing discussion about the importance of open source information – both in terms of its power and potential to create and disseminate news and narratives worldwide, whether genuine or fake, and in terms of the pressing need to evolve the open source intelligence (OSINT) discipline.

“Devaluing OSINT has become a more significant problem as Russia and China use social media as an arena to wage disinformation operations,” wrote Dana Priest, commenting in The New Yorker on Russian meddling in the U.S. election.

Europe has been sensitized not only to the speed with which information, including disinformation, can be conveyed to its citizenry, but also to the power of such messaging to create confusion, mistrust, and even a distortion of attitudes and actions.

In response to this threat, Denmark announced in July that it had begun to train its troops, designated for deployment in Estonia, in combating disinformation. And on Nov. 13, the European Commission launched a public consultation on fake news and online disinformation and set up a High-Level Expert Group representing academics, online platforms, news media and civil society organizations.

The open source landscape continues to evolve at a head-spinning pace, and this dynamic evolution is challenging, in earnest, long-held perceptions of what practitioners fondly refer to as the “INT of first resort.”

“I don’t think it has had its heyday,” Jason Matheny, director of IARPA, recently told The Cipher Brief about the state of open source intelligence. “We don’t invest very much in open source intelligence compared to classified sources of intelligence as the intelligence community.”

As a former director of the Open Source Center, now the Open Source Enterprise, I cannot agree more with this statement. Over the course of its 76-year history, the U.S. government’s OSINT venture has experienced all-too-fleeting moments of high-level attention and committed investment only to fall back into longer periods of disinterest and flattened or reduced budgets.

source: darkreading.com

IoT devices are rapidly populating enterprise networks, but 82% of IT and line of business professionals struggle to identify all the network-connected devices within their enterprise.

According to a new Forrester study that queried 603 IT and line-of-business decision-makers at organizations around the globe with 2,500 or more employees, a key contributor to the IoT visibility problem may be confusion over who is responsible for IoT management and security.

While 50% of survey respondents, who include line of business (LoB) and IT security operations center (SOC) professionals, say the SOC is responsible for default configurations and management of the devices, confusion sets in when it's time to configure the devices, according to the survey, which was commissioned by ForeScout Technologies.

LoB personnel, who are responsible for operational technology (OT) that runs specific lines of business, often find their role falling under the broad category of connected devices, or IoT.

But when drilling down further on the question of which job titles should be responsible for IoT default configurations, 54% of LoB survey respondents feel it should be overseen by device manufacturers or LoB staff. And 45% of IT respondents agree.

As a result, according to the report, LoB users are deploying devices under the assumption that all proper controls are in place, without touching base with the SOC. And without SOC professionals involved in the initial setup of IoT devices, it's difficult to get a clear view of which devices are actually riding on the network.

"There is a lot of confusion and lack of clarity of who should own the security of IoT devices and determine what should happen," says Pedro Abreu, chief strategy officer for ForeScout. "LoBs, like plant managers, have a lot of devices that connect to the network. But they tend to think of health and safety first and not security."

source: cnet.com

Hackers who get hold of some OnePlus phones can obtain virtually unlimited access to files and software through use of a testing tool called EngineerMode that the company evidently left on the devices.

Robert Baptiste, a freelance security researcher who goes by the name Elliot Alderson on Twitter after the "Mr. Robot" TV show character, found the tool on a OnePlus phone and tweeted his findings Monday. Researchers at mobile security firm NowSecure helped figure out the tool's password, a step that means hackers can gain unrestricted privileges on the phone as long as they have the device in their possession.

The EngineerMode software functions as a backdoor, granting access to someone other than an authorized user. Escalating those privileges to full do-anything "root" access required a few lines of code, Baptiste said.

"It's quite severe," Baptiste said via a Twitter direct message.

OnePlus disagreed, though it said it has decided to modify EngineerMode.

"EngineerMode is a diagnostic tool mainly used for factory production line functionality testing and after sales support," the company said in a statement. Root access "is only accessible if USB debugging, which is off by default, is turned on, and any sort of root access would still require physical access to your device. While we don't see this as a major security issue, we understand that users may still have concerns and therefore we will remove the adb [Android Debug Bridgecommand-line tool] root function from EngineerMode in an upcoming OTA."

source: securityweek.com

As security professionals, we’ve faced no shortage of challenges since the start of 2017 -- from the abundance of large-scale data breaches, ransomware attacks, and business email compromise schemes, to risks posed by Internet of Things (IoT) devices, supply chain vulnerabilities, and insider threats. These challenges have ultimately driven noteworthy shifts not only in how we approach security, but also in how we obtain, apply, and integrate intelligence.

Here are the top three trends that defined the evolution of intelligence in 2017:

Increased engagement in intelligence sharing

Most of us can agree that when executed correctly, intelligence sharing can be highly beneficial -- yet historically, the extent to which many organizations have shared intelligence has been limited or non-existent. While rightful concerns over trust and privacy will likely always hinder participation, intelligence sharing has gained substantial traction as a “best practice” in 2017. The emergence of various new intelligence sharing groups has contributed to this trend, as has the substantial number of threats and resulting incidents for which external collaboration was integral to mitigation and forensics efforts.

The collaborative takedown of the WireX botnet this past August is a great example. Following the news that researchers from Akamai, Cloudflare, Flashpoint, RiskIQ, and others teamed up to neutralize a massive DDoS botnet, they were widely recognized not just for tackling WireX, but also because their joint effort epitomized the immense benefits to be gleaned from effective, trusted collaboration and intelligence sharing. 

Balancing automation with human-powered analysis

The introduction of automation has led to sweeping changes throughout the industry over the last few years. Among these changes is the emergence of the term “automated intelligence.” Typically comprising data collected by automated tools from various online sources, automated intelligence isn’t really intelligence at all -- a fact that has become even more clear in 2017. 

While traditional uses for certain types of intelligence have long consisted of technical indicators of compromise (IoCs), most of which are gleaned through automation, more organizations are recognizing that IoCs and other automated data are rarely actionable until contextualized and further enhanced by human analysts.
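As a rough illustration of that last point, and not something taken from the article, the sketch below models the difference between a machine-collected IoC and one an analyst has contextualized; every field name and value here is hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Indicator:
        # Machine-collected attributes: what an automated feed can supply on its own.
        value: str                    # e.g. an IP address, domain, or file hash
        ioc_type: str                 # "ip", "domain", "sha256", ...
        source: str                   # the feed or tool that produced it
        first_seen: datetime
        # Analyst-supplied context: typically empty until a human reviews the hit.
        campaign: str = ""            # associated activity or campaign, if attributed
        confidence: str = ""          # analyst-assigned confidence level
        recommended_action: str = ""  # what a responder should actually do
        notes: list = field(default_factory=list)

    def enrich(raw: Indicator, campaign: str, confidence: str, action: str) -> Indicator:
        # Human review is what turns a bare data point into something actionable.
        raw.campaign = campaign
        raw.confidence = confidence
        raw.recommended_action = action
        return raw

    feed_hit = Indicator("203.0.113.7", "ip", "example-open-feed",
                         datetime.now(timezone.utc))
    actionable = enrich(feed_hit, campaign="example-ddos-wave", confidence="medium",
                        action="block at the perimeter and review proxy logs for prior contact")

Until the context fields are filled in, the feed hit is just data rather than intelligence, which is the distinction this trend is getting at.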