
Not a day goes by that Americans don’t wake to news of a new cyber intrusion affecting private-sector or government networks, whether major hacks at Target or Equifax, sloppy data breaches like those Verizon experienced, or nation-state-sponsored efforts like the WannaCry ransomware. Companies and institutions are pouring more time, attention, and resources into computer network security, because the networks are so critical. But why lock the front door when you leave the windows wide open? Bad actors can launch attacks and gain access to critical information through other routes too.

As seen with the widely reported interference in democratic elections, attacks can be launched cheaply and relatively easily by criminals, nation-states, terrorists, disgruntled employees, or even good people with sloppy habits who accidentally expose critical data. As a former Secretary of the Air Force, I can tell you that Air Force networks are attacked—and these attacks are repelled—thousands of times per week.

This is why, in addition to network security, the Air Force is focusing more resources on operational security. The private sector should follow suit.

Operational security means protecting assets that depend on lines of code in software to conduct missions, whatever those missions might be. This could involve anything from protecting advanced fighter aircraft to the HVAC systems on a base where critical operations take place. It could include the MRI machine in a hospital entrusted with sensitive patient data. Our critical infrastructure—the electrical grid and transportation systems, for example—can be equally vulnerable from an operational perspective, if network security is the sole focus.

The solution is to broaden the national cybersecurity approach to include “endpoint security” for vital operational systems. Stated another way, we need to wrap firewalls around certain vital machines to ensure that an intrusion in one area will not allow for a more extensive penetration to the broader network.

Consider a fictional scenario in which a U.S. nuclear facility is breached. A terrorist group launches a “cyber-physical attack” by unleashing a virus that penetrates the sensors that monitor cooling. The malware is introduced when an infected flash drive is inserted into a network laptop during maintenance to adjust, for example, process sequences. The laptop is presumed to be safe because it’s not connected to the internet—it is “air gapped.” The virus targets specific endpoints that manage fail-safe functions such as temperature maximums. It tells the temperature sensors to stop working and, at the same time, instructs other embedded controllers to escalate heat-generating functions. The result could be catastrophic overheating and, ultimately, a meltdown.

Such attacks, and many others we haven’t thought of yet, are preventable when control systems are more deeply protected. Each device and sensor comprising the network can and should be shielded from malware that gets through the figurative front door.

Here’s the bottom line: we need a holistic approach to cybersecurity going forward, including network and endpoint security. Focusing on one but not the other could result in crippling losses in today’s machine-to-machine marketplace.

The government and the private sector need to keep working to lock the front door, and start doing a better job of bolting the windows.


In the near future – in all likelihood, later this month – at least Windows and Linux will get security updates that change the way those operating systems manage memory on Intel processors.

There’s a lot of interest, excitement even, about these changes: they work at a very low level and are likely to affect performance.

The slowdown will depend on many factors, but one report suggests that database servers running on affected hardware might suffer a performance hit of around 20%.

“Affected hardware” seems to include most Intel CPUs released in recent years; AMD processors have different internals and appear to be immune, or at least far less exposed.

So, what’s going on here?

On Linux, the forthcoming patches are known colloquially as KPTI, short for Kernel Page Table Isolation, though they have jokingly been referred to along the way as both KAISER and F**CKWIT.

The latter is short for Forcefully Unmap Complete Kernel With Interrupt Trampolines; the former for Kernel Address Isolation to have Side-channels Efficiently Removed.

Here’s an explanation.

Inside most modern operating systems, you’ll find a privileged core, known as the kernel, that manages everything else: it starts and stops user programs; it enforces security settings; it manages memory so that one program can’t clobber another; it controls access to the underlying hardware such as USB drives and network cards; it rules and regulates the roost.

Everything else – what we glibly called “user programs” above – runs in what’s called userland, where programs can interact with each other, but only by agreement.

If one program could casually read (or, worse still, modify) any other program’s data, or interfere with its operation, that would be a serious security problem; it would be even worse if a userland program could get access to the kernel’s data, because that would interfere with the security and integrity of the entire computer.

One job of the kernel, therefore, is to keep userland and the kernel carefully apart, so that userland programs can’t take over from the kernel itself and subvert security, for example by launching malware, stealing data, snooping on network traffic and messing with the hardware.

The CPU itself provides hardware support for this sort of separation: the x86 and x64 processors provide what are known as privilege levels, implemented and enforced by the chip itself, that can be used to segregate the kernel from the user programs it launches.

Intel calls these privilege levels rings, of which there are four; most operating systems use two of them: Ring 0 (most privileged) for the kernel, and Ring 3 (least privileged) for userland.


WHEN HUMANS ARE finally ready to relocate civilization to Mars, they won’t be able to do it alone. They’ll need trusted specialists with encyclopedic knowledge, composure under pressure, and extreme endurance—droids like Justin. Built by the German space agency DLR, such humanoid bots are being groomed to build the first martian habitat for humans. Engineers have been refining Justin’s physical abilities for a decade; the mech can handle tools, shoot and upload photos, catch flying objects, and navigate obstacles. Now, thanks to new AI upgrades, Justin can think for itself. Unlike most robots, which have to be programmed in advance and given explicit instructions for nearly every movement, this bot can autonomously perform complex tasks—even those it hasn’t been programmed to do—on a planet’s surface while being supervised by astronauts in orbit. Object recognition software and computer vision let Justin survey its environment and undertake jobs such as cleaning and maintaining machinery, inspecting equipment, and carrying objects. In a recent test, Justin fixed a faulty solar panel in a Munich lab in minutes, directed via tablet by an astronaut aboard the International Space Station. One small chore for Justin, one giant leap for future humankind.


Who: Justin—it was completed “just in” time for a 2006 trade show

Height: 6' 3''

Weight: 440 pounds

Lifting Strength: 31 pounds in each arm

Unexpected Talent: Making tea and coffee

Eyes: Hi-def cameras and sensors embedded in the head generate a 3-D view of Justin’s surroundings.

Probe: An R2D2-style data interface means Justin can sync up to computers and data collection stations. Eventually it will be able to charge its own battery by plugging into a solar power unit.

Hands: Eight jointed fingers allow the bot to deftly handle tools.

Base: Justin’s protocols are stored onboard, so it can complete tasks and save data even if communication links fail.

Wheels: DLR tested Justin’s future all-terrain robot wheels atop an active volcano.


IF THE WEB were an amusement park attraction, you’d have to be 10 feet tall to ride—it's terrifying enough for adults and a funhouse of horrors for kids, from inappropriate content to unkind comment sections to outright predators.


And yet! The internet also affords opportunities to learn, to socialize, to create. Besides, at this point trying to keep your kids off of it entirely would be like keeping them away from electricity or indoor plumbing. They’re going to get online. Your job is to help them make good choices when they get there.

Yes, there are parent-friendly routers you can buy, and software you can use, to limit your child’s access to the internet. But it's more important to create a mental framework that helps keep your kids safe—and teaches them to protect themselves.

Adjust as Needed

One reason it’s so hard to offer concrete rules governing kids and the internet is that no two kids are alike. It’s like keeping kids safe after homecoming. Some might just need a curfew, others a breathalyzer.

Think of sending your kids out into the internet, then, in the same way you think about sending them out into the world. Different age groups require different amounts of oversight; even within a specific age, different kids have different inclinations, and with them different needs.

As muddied a picture as it sounds, at least some legal guidelines exist. The Children’s Online Privacy Protection Act, passed in 1998, creates safeguards, like restricting data collection from children under 13, that effectively keep young kids off most social media. (Facebook has recently attempted to skirt that with a version of Messenger aimed at kids 6 and older.) Even so, millions of kids under 13 have found their way onto Facebook anyway, often with parental consent. Don't give in!