Whiteboards line the walls of this lab deep inside the Defense Intelligence Agency's Washington headquarters, covered from floor to ceiling with hand-scrawled computer code and technical notes. One item stands out at the center of the room's rear wall:

"Attention should also be given to opinion, comments, and jokes of common people."

This axiom, from the original al-Qaida training manual, was a reminder from the terror network's leadership to its foot soldiers that even the most benign communications can provide invaluable lessons about an enemy. It is posted next to another quotation, this one from DIA Director Vincent Stewart, that says 90 percent of intelligence is publicly available. The expressions echo a key theme guiding the DIA's work in helping America fight its modern wars, but they also serve as a harsh reminder of one of the greatest limitations facing the shadowy agency.

Information is everywhere. Whether it's collected deliberately or incidentally, personally or digitally, agents have access to an ever-growing cache of data from sources, surveillance and social media that easily overwhelms their limited ability to sift, sort and organize information into intelligence. And without intelligence, the next plan to attack a U.S. city might be successful, the next ambush of American troops could be fatal.

"There's so much data out there now," says Robert Dixon Jr., a special adviser for programs and transition within the DIA's innovation office. "Everything is about information. You need to be able to predict what your adversaries are doing next."

As a result, one of the central focuses of the DIA's intelligence efforts is the swiftly advancing technology that allows computers to learn from data – known as machine learning, a branch of artificial intelligence, or simply AI – to recognize trends, patterns and associations in so-called "big data" and help ease the burden on analysts who have a seemingly endless stream of tweets to scrub for potential extremist plots and countless hours of drone footage to pore over.
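The kind of triage described above can be illustrated with a toy example. The sketch below is purely hypothetical – it is not any DIA system, and its messages and labels are made up – but it shows one of the simplest machine-learning techniques for sorting text, a naive Bayes classifier, trained on a handful of invented messages to surface the ones that might merit an analyst's attention.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase a message and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Minimal multinomial naive Bayes text classifier (a toy sketch)."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)   # word tallies per label
        self.label_counts = Counter(labels)       # document tallies per label
        for text, label in zip(texts, labels):
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best_label, best_lp = None, float("-inf")
        n_docs = sum(self.label_counts.values())
        for label in self.label_counts:
            # log prior + log likelihood with add-one smoothing
            lp = math.log(self.label_counts[label] / n_docs)
            total = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                lp += math.log((self.word_counts[label][w] + 1) / total)
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label

# Entirely made-up training messages, labeled "flag" (worth a look) or "ok".
train = [
    ("the convoy leaves at dawn toward the target", "flag"),
    ("plans to attack the checkpoint tonight", "flag"),
    ("great game last night, see you at the gym", "ok"),
    ("dinner at seven, bring the kids", "ok"),
]
clf = NaiveBayes().fit([t for t, _ in train], [l for _, l in train])
print(clf.predict("convoy heading to the target"))  # flag
print(clf.predict("see you at dinner"))             # ok
```

Real systems operate on billions of messages with far richer models, but the principle – statistics standing in for human eyes on every message – is the same.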

In some ways, the U.S. government is behind its adversaries in the use of artificial intelligence, experts say. The authoritarian regimes in China and Russia, for example, depend for survival on detecting and rooting out domestic subversion and have had a free hand to apply the technology unburdened by complaints about civil rights, privacy or appropriateness.

Russia has fewer qualms than the U.S. about what kinds of instruments of war it connects to AI, including its military's development of armed, unmanned vehicles, similar to a ground-based drone – a concept the U.S. has by policy hesitated to embrace. China, meanwhile, has excelled at developing AI through a vast pool of well-trained computer engineers who train these machines on a volume of data – produced by 1.4 billion citizens and the smartphones they carry – that no other country can match.

The U.S., which principally sees AI as a national security tool to be employed on the battlefield or to thwart terror attacks, trails both countries. And with AI also emerging as a crucial tool for recognizing and deflecting cyberattacks, which now occur at a rate too fast for any human to anticipate and foil, the shortcomings are even more unsettling.

"Industry is behind the curve right now with cyber defense not keeping pace with offense," says Matt Devost, managing director at Accenture Security and frequent adviser to the government on cyber security issues. "We have a shortage of human capital, so we don't have enough skilled people to engage in the expanding cyber defense mission."

"That means we have to augment the folks we have with AI and machine learning to get more value out of the human layer."

Further hobbling U.S. war planners is a tradition hardened over decades of spending defense dollars in a slow and cumbersome manner that allows for the careful development of new projects, like an aircraft carrier, designed to last for decades. That model hasn't been quick enough to keep up with the rate at which Islamic extremists use social media or at which China and Russia have invested in their militaries in recent years.

So the DIA, through its new Innovation Hub office, is trying out a new way to solicit ideas from AI experts. The assembled technicians and analysts in the lab on this day are receiving pitches from private companies showing off products they believe can solve problems the DIA has identified. It's the third session since this program began six months ago.

Participants could be awarded as much as $250,000 in seed money to develop a program or technology for DIA to evaluate for potential investment. The agency has so far doled out $2 million on six projects over the last fiscal year. The money comes from a dedicated DIA budget, though it also partners with the Defense Innovation Unit Experimental, or DIUx, to fund some projects.

The "pitch days" are part of a broader Defense Department effort to invest in more nimble warfighting technology, building on a recent trend – accelerated under former Defense Secretary Ash Carter – that produced offices like DIUx to find modern tools quickly for current wars.

"The government in the past has been very bureaucratic, focused on structure, rigor, and at times averse to change," Dixon says. "This will streamline the process and become more efficient."

And the kinds of technologies the DIA is currently considering funding are straight out of spy movies. One company that pitched last week referenced the Jason Bourne film series to show how it could track a particular face – in this case actor Matt Damon's – across a series of disparate video clips. A program like that could be used to automatically identify a target across multiple closed-circuit surveillance cameras, as those thrillers commonly depict.
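Face-matching systems of that kind generally work by converting each detected face into a numeric "embedding" and comparing embeddings across clips. The sketch below is a hypothetical illustration, not the vendor's product: it assumes some face-recognition model has already produced the embeddings (the tiny three-number vectors and camera labels here are invented), and it simply measures cosine similarity to decide whether two sightings are the same person.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_face(query, gallery, threshold=0.8):
    """Return the sighting ids whose embedding is close enough to the query face."""
    return [sid for sid, emb in gallery.items() if cosine(query, emb) >= threshold]

# Made-up 3-number embeddings; real systems use hundreds of dimensions
# produced by a deep face-recognition network.
gallery = {
    "cam1_frame204": [0.90, 0.10, 0.40],   # hypothetically, the target
    "cam2_frame077": [0.10, 0.95, 0.20],   # a different person
}
query = [0.88, 0.12, 0.41]  # a new sighting of the same face
print(match_face(query, gallery))  # ['cam1_frame204']
```

The threshold is the operational knob: set it too low and different people are confused for one another; too high and the system loses its target between cameras.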

Daniel Osher, the president of Integral Mind, says he has developed a tool that can capture cultural nuances – the role religion plays in a society, for example – and then build a program that can, he says, accurately predict how an international incident will play out. Osher says his program could have anticipated the violent uprising that led to the 1993 "Black Hawk Down" battle in Mogadishu, Somalia, and could have predicted how the 2015 Joint Comprehensive Plan of Action, better known as the Iran nuclear deal, would conclude – including what outcomes each side would demand and where they would ultimately agree.

The technology is based on what he calls "mind maps," computer programs his company creates through targeted interviews with people from a particular country to help determine what informs their decisions in order to predict how they might react to certain provocations. It could be instrumental for the wars the U.S. currently fights against extremism, where winning over local support is as important as eliminating enemy combatants.

"It's this nebulous warfare, where you have to convince people that you're right. The mind is becoming the new battlespace," Osher said, shortly after finishing his presentation to the lab full of DIA analysts and technicians. "This will help them understand who to trust and how to influence them. … Here's how you get yourself into their head."

AI is so new that the U.S. doesn't yet have laws completely governing its use or answering the key question of how involved humans should be in the decision-making process, particularly in determining when a computer may decide on its own whether to carry out a lethal action.

"You can never take humans out of the loop," Dixon says. He expresses high confidence that protections are in place to prevent, for example, a drone deciding on its own to launch a missile, though he adds that the technology "is too new" to know exactly how frequently a human being belongs in the decision-making process. Some critics question why the government is relying on AI technology before those questions have clear answers.

"What we're doing is for the national security of our nation. That's an incentive right there," Dixon adds.

Other experts are concerned about seemingly benign decisions that could have even more catastrophic effects. Futurists worry about a world in which a machine decides to undercut a foreign country's currency to give its own military an edge, precipitating a global financial collapse.

"It's really a serious challenge," says Paul Scharre, director of the Center for a New American Security's Technology and National Security Program and a former adviser to the secretary of defense on writing the rules for new-age technologies like drones. "It's not a reason not to use the technology, but when we think about how to employ it, we need to think about the human interface, the transparency, the vulnerability of AI, and how to train people in how to use the system."

In many ways, he says, there's no alternative to exploring how the U.S. military could better employ AI.

"We're drowning in data," Scharre says. "We have massive amounts of data that we are collecting from terrorists and other operations, and sifting through that data, it's just not possible anymore by putting human eyes and human ears on it."