Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

News & Supplemental  

Droning On: Interesting Timing in the Middle East

Oswald: Both JFK and the U-2 Incident?!

COMMENT: The apparent capture and downing by Iran of a U.S. drone aircraft raises a number of interesting questions.

If the account presented in “Debka” can be believed, the possibility of some sort of “inside job” is one to be carefully considered. (“Debka” is an intelligence newsletter specializing in Middle Eastern and Israeli national security matters.)

With the Ptech/Islamist/Muslim Brotherhood links to the GOP and the Underground Reich un-interdicted, the possibility of GOP sabotage to prevent successful military action against Iran must be considered. (Mitt Romney said that if Obama is re-elected, Iran will get the bomb. Although that may very well be a done deal, it is interesting to note the proximity of the drone capture to Romney’s remarks.)

  • Was Ptech technology involved here?
  • Might U.S. and/or Israeli national security interests have helped engineer this to forestall an Israeli attack on Iran, seen by many as a blueprint for wider devastation and disaster?
  • Might Islamists and/or Underground Reich personnel have been involved?
  • Might GOP personnel have been involved in bringing down the drone to cause embarrassment and/or dysfunctionality for the Obama administration, not unlike the U-2 incident and its effect on the Eisenhower summit with Soviet premier Khrushchev, or the October Surprise and its effect on the re-election campaign of Jimmy Carter?
  • Might right-wing Israeli elements have been involved, in order to generate pressure for an attack on Iran by Israel? Note that Debka has a strong bias toward the Israeli right wing.

“Iran Exhibits U.S. Drone Undamaged. U.S. and Israel Shocked”; DEBKAfile; 12/8/2011.

EXCERPT: Iran exhibited the top-secret US stealth drone RQ-170 Sentinel captured on Sunday, Dec. 4. Its almost perfect condition confirmed Tehran’s claim that the UAV was downed by a cyber attack, meaning it was not shot down but brought in undamaged by an electronic warfare ambush.

This is a major debacle for the stealth technology the US uses in its warplanes and the drone technology developed by the US and Israel.
The state of the lost UAV refutes the US military contention that the Sentinel’s systems malfunctioned. If this had happened, it would have crashed and either been wrecked or damaged.
The intact condition of the RQ-170 obliges the US and Israel to make major changes in plans for a potential strike against Iran’s nuclear program.
Earlier Thursday, Debkafile reported:
The Obama administration’s decision after internal debate not to send US commando or air units into Iran to retrieve or destroy the secret RQ-170 stealth drone which fell into Iranian hands has strengthened the hands of the Israeli faction which argues the case for striking Iran’s nuclear installations without waiting for the Americans to make their move.
Senior Israeli diplomatic and security officials who followed the discussion in Washington concluded that, by failing to act, the administration has left Iran not only with the secrets of the Sentinel’s stealth coating, its sensors and cameras, but also with the data stored in its computer cells on targets marked out by the US and/or Israel for attack.
Debkafile’s military sources say that this knowledge compels the US and Israel to revise their plans of attack for aborting the Iranian nuclear program.
Like every clandestine weapons system, the RQ-170 had a self-destruct mechanism to prevent its secrets spilling out to the enemy in the event of a crash or capture. This did not happen. Tehran was able to claim the spy drone was only slightly damaged when they downed it.
The NATO spokesman claimed that control of the US UAV was lost and it went missing, a common occurrence for these unmanned aircraft.
The enigmas surrounding its capture continue to pile up. How did Iran know the drone had entered its airspace? How was it caused to land? Most of all, why did the craft’s self-destruct mechanism, which is programmed to activate automatically, fail to work? And if it malfunctioned, why was it not activated by remote control? . . . .


10 comments for “Droning On: Interesting Timing in the Middle East”

  1. Here is a perhaps telling quote from Mr. Ahmadinejad.

    He said: ‘The Americans have perhaps decided to give us this spy plane. We now have control of this plane.’

    Posted by grumpusrex | December 16, 2011, 8:25 am
  2. Based on this proclamation from the Pakistani military, the next downed drone might be in Pakistan:

    Pakistan says U.S. drones in its air space will be shot down
    By NBC News, msnbc.com staff and news service reports

    Updated at 8 p.m. EST

    ISLAMABAD — Pakistan will shoot down any U.S. drone that intrudes into its air space, per new directives, a senior Pakistani official told NBC News on Saturday.

    According to the new Pakistani defense policy, “Any object entering into our air space, including U.S. drones, will be treated as hostile and be shot down,” a senior Pakistani military official told NBC News.

    The policy change comes just weeks after a deadly NATO attack on Pakistani military checkpoints accidentally killed 24 Pakistani soldiers, prompting Pakistani officials to order all U.S. personnel out of a remote airfield in Pakistan.

    Pakistan told the U.S. to vacate Shamsi Air Base by December 11.

    Pakistani authorities started threatening U.S. personnel with eviction from the Shamsi base in the wake of the raid last May in which U.S. commandos killed Osama bin Laden at his hide-out near Islamabad without notifying Pakistani officials in advance.

    And it might be NATO-operated:

    DECEMBER 15, 2011

    U.S. Pursues Sale of Armed Drones

    The Obama administration has been quietly pushing to sell armed drones to key allies, but it has run into resistance from U.S. lawmakers concerned about the proliferation of technology and know-how.

    The Pentagon wants more North Atlantic Treaty Organization members to have such pilotless aircraft to ease the burden on the U.S. in Afghanistan and in future conflicts like the alliance’s air campaign in Libya this year.

    Administration officials recently began informal consultations with lawmakers about prospective sales of armed drones and weapons systems to NATO members Italy and Turkey, while several U.S. allies in the Persian Gulf have been pressing Washington to authorize drone sales, officials said.

    The Pentagon also wants to sell Turkey up to two armed drones and four surveillance drones, according to officials briefed on the discussions. But they say the Turkey deal is unlikely to move forward if lawmakers refuse to sign off on the Italian sale.

    Turkey wants to use the drones against the outlawed Kurdistan Workers’ Party, or PKK. The Pentagon has been sharing with Turkey real-time intelligence from U.S. drone missions in northern Iraq and along the border, helping Turkey’s air force pinpoint PKK positions for strikes, U.S. officials say. Turkey wants to do the missions itself, a shift supported by the Pentagon.

    Several of America’s allies in the Persian Gulf region are also pushing to purchase armed drones. U.S. officials say such requests could also prove controversial in Congress because of lawmakers’ concerns about the potential impact on Israel’s military edge in the region.

    Lawmakers have told the administration they are concerned U.S. exports of armed drones could make it harder for Washington to make the case to Israel, a pioneer in drone development, to limit its own foreign sales of drones that could rival the U.S.‘s. Israel already sells drones to India and other countries.


    Posted by Pterrafractyl | December 16, 2011, 8:31 pm
  3. You don’t say...:

    July 19th, 2012
    06:40 PM ET
    Drones vulnerable to being hacked, Congress told

    By Todd Sperry

    It wouldn’t take much effort to hijack a drone over U.S. airspace and use it to commit a crime or act of terrorism, an aerospace engineering expert told a House subcommittee Wednesday.

    Todd Humphreys showed members of a House homeland security subcommittee how his research team was able to commandeer an $80,000 drone using store-bought global positioning system (GPS) technology.

    Drones, including ones used by police agencies, are vulnerable to hacking because they use unencrypted GPS information for navigation.

    “If you can convincingly fake a GPS signal, you can convince an (unmanned aerial vehicle) into tracking your signal instead of the authentic one, and at that point you can control the UAV,” said Humphreys, an assistant professor specializing in orbital mechanics at the University of Texas.

    Humphreys said hacking and spoofing to take control of a drone can be done from miles away.

    The U.S. military uses encrypted GPS on drones flying in war zones such as Afghanistan. To use similar technology on all drones would increase costs dramatically, according to Government Accountability Office (GAO) officials who attended Thursday’s hearing on Capitol Hill.

    GAO officials have suggested that the Homeland Security Department and the Federal Aviation Administration collaborate in regulating drones. But the Department of Homeland Security has, up to this point, been unwilling to accept a role in regulating drones, according to Rep. Michael McCaul, R-Texas.

    DHS officials were repeatedly chastised by committee members for failing to show up for Thursday’s hearing.

    Drones are currently a growth industry in the aviation sector, with scores of new companies competing for a slice of the market. And if they can clear hurdles that currently limit their deployment in friendly airspace, pilotless planes of all shapes will be taking to the air on missions to watch over us.

    Just what sort of reconnaissance the drones will do and how such uses might infringe on civil liberties was a hot-button issue at Thursday’s hearing.

    Privacy advocates are seeking tighter regulation, arguing that anyone can purchase a drone and use it to peek into backyards and places that typically are private.

    Unregulated, hackable spy drones — public and private — flying around the US. Smile for the camera, folks! :D

    At least the military’s drones appear to be using encrypted GPS, so they’re not quite as hackable as their civilian counterparts. Let’s all just hope that our future civilian spy drone fleets beaming back a constant stream of video surveillance don’t follow the military’s drone security protocols too closely. Granted, we could also simply hope that we don’t end up filling our sky with fleets of unregulated surveillance drones beaming who-knows-what to who-knows-who from who-knows-where (don’t we already have the internet for that?). But, you know, we’re in a depression and drones are a “hot” industry right now. So we really can’t afford NOT to build an even more giant surveillance state. It’ll be good for the economy.
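    The attack Humphreys describes works because civilian GPS is unencrypted and unauthenticated: a receiver has no way to verify a signal, so it locks onto a counterfeit broadcast that is stronger than the real one. Here is a toy Python sketch of that flaw (the names and numbers are invented for illustration; real spoofing forges the actual satellite signal structure, not a data record):

```python
from dataclasses import dataclass


@dataclass
class GpsSignal:
    power_dbm: float    # received signal strength
    claimed_pos: tuple  # (lat, lon) the signal implies


def naive_fix(signals):
    """An unencrypted receiver cannot authenticate signals, so a naive
    one trusts whichever lock is strongest: the flaw spoofing exploits."""
    return max(signals, key=lambda s: s.power_dbm).claimed_pos


true_sat = GpsSignal(power_dbm=-130.0, claimed_pos=(33.99, 53.06))  # genuine
spoofer = GpsSignal(power_dbm=-125.0, claimed_pos=(34.50, 52.00))   # counterfeit

fix = naive_fix([true_sat, spoofer])
print(fix)  # the receiver now "navigates" from the spoofer's fake position
```

    Military receivers avoid this by using the encrypted signals the article mentions, which an attacker cannot convincingly forge.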

    Posted by Pterrafractyl | July 19, 2012, 9:43 pm
  4. Well, it looks like the fundamentalists were right: porn really will destroy civilization:

    Let A Thousand Euphemisms Bloom
    Josh Marshall August 1, 2012, 9:32 PM

    Pentagon’s Missile Defense Agency warns staffers to stop using the missile defense computer network to download so much porn.

    From Bloomberg …

    The Pentagon’s Missile Defense Agency warned its employees and contractors last week to stop using their government computers to surf the Internet for pornographic sites, according to the agency’s executive director.

    In a one-page memo, Executive Director John James Jr. wrote that in recent months government employees and contractors were detected “engaging in inappropriate use of the MDA network.”

    “Specifically, there have been instances of employees and contractors accessing websites, or transmitting messages, containing pornographic or sexually explicit images,” James wrote in the July 27 memo obtained by Bloomberg News.

    This was the part that interested me most …

    A government cybersecurity specialist, who spoke on the condition of anonymity because such work is classified, said that many pornographic websites are infected and criminals and foreign intelligence services such as Russia’s use them to gain access to and harvest data from government and corporate computer networks.

    “There are great dangers in interacting with any site that has high-quality imagery, whether it’s pornographic or not, or a lot of links,” said Chase Cunningham, chief of cyber analytics at Sterling, Virginia-based Decisive Analytics Corporation, in a telephone interview yesterday.

    Apparently, foreign intelligence services know what our spooks want to see.

    Posted by Pterrafractyl | August 1, 2012, 8:38 pm
  5. If you thought the recent revelation of the US government’s Judge Dredd Drone legal memo has a “through the looking glass” feel to it, keep reading...

    Posted by Pterrafractyl | February 6, 2013, 2:39 pm
  6. One of the more interesting and terrifying aspects of the future of drone warfare is that it’s likely going to take on a dynamic similar to the Anonymous phenomenon... once the microdrone revolution gets underway, not only will these things become drastically more accessible and affordable, but you may not even know one was there, and you almost certainly won’t know who sent it. The inevitable drone blowback might be a lot smaller than folks expect:

    Business Insider
    The Future Of Micro Drones Could Get Downright Scary
    Robert Johnson | Jun. 20, 2012, 11:49 AM

    It’s been several years since the rumors and sightings of insect-sized micro drones started popping up around the world.

    Vanessa Alarcon was a college student when she attended a 2007 anti-war protest in Washington, D.C. and heard someone shout, “Oh my God, look at those.”

    “I look up and I’m like, ‘What the hell is that?’” she told The Washington Post. “They looked like dragonflies or little helicopters. But I mean, those are not insects,” she continued.

    A lawyer there at the time confirmed they looked like dragonflies, but that they “definitely weren’t insects”.

    And he’s probably right.

    In 2006 Flight International reported that the CIA had been developing micro UAVs as far back as the 1970s and had a mock-up in its Langley headquarters since 2003.

    While we can go on listing roachbots, swarming nano drones, and synchronized MIT robots — private trader and former software engineer Alan Lovejoy points out that the future of nano drones could become even more unsettling.

    Lovejoy found this CGI mock-up of a mosquito drone equipped with the ‘ability’ to take DNA samples or possibly inject objects beneath the skin.

    According to Lovejoy:

    Such a device could be controlled from a great distance and is equipped with a camera and microphone. It could land on you and then use its needle to take a DNA sample with the pain of a mosquito bite. Or it could inject a micro RFID tracking device under your skin.

    It could land on you and stay, so that you take it with you into your home. Or it could fly into a building through a window. There are well-funded research projects working on such devices with such capabilities.


    Oooooo... a mosquito-like microdrone that can inject things into your body. The nanodrone revolution sure should be interesting.

    Posted by Pterrafractyl | February 8, 2013, 2:17 pm
  7. Don’t blame us for bombing your village, it was our flying deathbot that thought it was a good idea:

    Rolling Stone
    ’The Point of No Return’: Should Robots Be Able to Decide to Kill You On Their Own?
    U.N. report calls for a moratorium, but lethal autonomous robots could be a reality soon

    By John Knefel
    April 30, 2013 3:10 PM ET

    A U.N. report released earlier this week called for a global moratorium on developing highly sophisticated robots that can select and kill targets without a human being directly issuing a command. These machines, known as Lethal Autonomous Robots (LARs), may sound like science fiction – but experts increasingly believe some version of them could be created in the near future. The report, released by Professor Christof Heyns, U.N. Special Rapporteur on extrajudicial, summary or arbitrary executions, also calls for the creation of “a high level panel on LARs to articulate a policy for the international community on the issue.”

    The U.S. Department of Defense issued a directive on the subject last year, which the U.N. report says “bans the development and fielding of LARs unless certain procedures are followed” – although DoD officials have called the directive “flexible.”

    Unlike groups like Human Rights Watch – which has called for an all-out ban on LARs – the U.N. report suggests a pause on their development and deployment, while acknowledging the uncertainty of future technologies. “The danger is we are going to realize one day we have passed the point of no return,” Heyns tells Rolling Stone. “It is very difficult to get states to abandon weaponry once developed, especially when it is so sophisticated and offers so many military advantages. I am not necessarily saying LARs should never be used, but I think we need to understand it much better before we cross that threshold, and we must make sure that humans retain meaningful control over life and death decisions.”

    Others who follow the subject echo these concerns. “I believe [LARs are] a paradigm shift because it fundamentally changes the requirements for human responsibility in making decisions to kill,” says Peter Asaro, co-founder and vice chair of the International Committee for Robot Arms Control. “As such, it threatens to create automated systems that could deny us of our basic human rights, without human supervision or oversight.”

    What does it mean for a technology to be autonomous? Missy Cummings, a technologist at MIT, has defined this quality as the ability “to reason in the presence of uncertainty.” But robot autonomy is a spectrum, not a switch, and one that for now will likely develop piecemeal. On one end of the spectrum are machines with a human “in the loop” – that is, the human being, not the robot, makes the direct decision to pull the trigger. (This is what we see in today’s drone technology.) On the other end is full autonomy, with humans “out of the loop,” in which LARs make the decision to kill entirely on their own, according to how they have been programmed. Since computers can process large amounts of data much faster than humans, proponents argue that LARs with humans “out of the loop” will provide a tactical advantage in battle situations where seconds could be the difference between life and death. Those who argue against LARs say the dangerous consequences that could arise from unleashing this technology vastly outweigh the slowdown added by having a human “in the loop.”

    Because LARs don’t yet exist, the discussion around them remains largely hypothetical. Could a robot distinguish between a civilian and an insurgent? Could it do so better than a human soldier? Could a robot show mercy – that is, even if a target were “legitimate,” could it decide not to kill? Could a robot refuse an order? If a robot acting on its own kills the wrong person, who is held responsible?

    Supporters argue that using LARs could have a humanitarian upside. Ronald Arkin, a roboticist and roboethicist at Georgia Tech who has received funding from the Department of Defense, is in favor of the moratorium, but is optimistic in the long term. “Bottom line is that protection of civilian populations is paramount with the advent of these new systems,” he says. “And it is my belief that if this technology is done correctly, it can potentially lead to a reduction in non-combatant casualties when compared to traditional human war fighters.”

    In a recent paper, law professors Kenneth Anderson and Matthew Waxman suggest that robots would be free from “human-soldier failings that are so often exacerbated by fear, panic, vengeance, or other emotions – not to mention the limits of human senses and cognition.”

    Still, many concerns remain. These systems, if used, would be required to conform to international law. If LARs couldn’t follow rules of distinction and proportionality – that is, determine correct targets and minimize civilian casualties, among other requirements – then the country or group using them would be committing war crimes. And even if these robots were programmed to follow the law, it is entirely possible that they could remain undesirable for a host of other reasons. They could potentially lower the threshold for entering into a conflict. Their creation could spark an arms race that – because of their advantages – would become a feedback loop. The U.N. report describes the fear that “the increased precision and ability to strike anywhere in the world, even where no communication lines exist, suggests that LARs will be very attractive to those wishing to perform targeted killing.”

    The report also warns that “on the domestic front, LARs could be used by States to suppress domestic enemies and to terrorize the population at large.” Beyond that, the report warns LARs could exacerbate the problems associated with the position that the entire world is a battlefield, one that – though the report doesn’t say so explicitly – the United States has held since 9/11. “If current U.S. drone strike practices and policies are any example, unless reforms are introduced into domestic and international legal systems, the development and use of autonomous weapons is likely to lack the necessary transparency and accountability,” says Sarah Knuckey, a human rights lawyer at New York University’s law school who hosted an expert consultation for the U.N. report.
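    The “in the loop” versus “out of the loop” spectrum described in the article comes down to whether a human approval gate sits between target selection and engagement. A minimal sketch of that distinction (hypothetical names, illustration only, not any real weapons system’s logic):

```python
from enum import Enum, auto


class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = auto()  # a person makes the direct decision to fire
    FULLY_AUTONOMOUS = auto()   # the LAR case: no human gate at all


def may_engage(mode: Autonomy, human_approved: bool) -> bool:
    """Engagement is permitted only if the mode's decision rule is met."""
    if mode is Autonomy.HUMAN_IN_THE_LOOP:
        return human_approved  # no human decision, no strike
    return True                # the machine decides entirely on its own


# Today's drones: a strike without human approval is blocked.
print(may_engage(Autonomy.HUMAN_IN_THE_LOOP, human_approved=False))  # False
# The scenario the U.N. report wants paused: the gate disappears.
print(may_engage(Autonomy.FULLY_AUTONOMOUS, human_approved=False))   # True
```

    The policy debate in the excerpt is essentially about whether that one conditional should ever be allowed to disappear.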


    Posted by Pterrafractyl | April 30, 2013, 2:03 pm
  8. And you thought the sticker-shock was bad:

    Pentagon downplays comment on F-35 fighter jet cyber threat

    By Andrea Shalal-Esa

    WASHINGTON | Thu Apr 25, 2013 7:17pm EDT

    (Reuters) — The Pentagon on Thursday downplayed a comment by one of its officials that he is not totally confident in the ability of the $396 billion F-35 Joint Strike Fighter, built by Lockheed Martin Corp, to survive a cyber attack.

    The Pentagon’s F-35 program office issued a statement that the Department of Defense was “fully aware of evolving cyber threats and is taking specific action to counter them for all fielded systems, including F-35.”

    “The F-35 is no more or less vulnerable to known cyber threats than legacy aircraft were during their initial development and early production,” spokesman Joe DellaVedova said when asked about a comment by Christopher Bogdan, the F-35 program manager, to lawmakers on Wednesday.

    Bogdan, an Air Force Lieutenant General, told a Senate Armed Services subcommittee that he was “not that confident” about security implemented by the companies that build the plane.

    Bogdan said the Pentagon and the international partners recognized the responsibility they had for safeguarding technology on the fifth-generation stealth fighter.

    He then added, “I’m a little less confident about industry partners, to be quite honest with you ... I would tell you I’m not that confident outside the department.”

    U.S. military officials and industry executives said on Thursday that government and defense industry networks get probed and attacked each day, but they were unaware of any specific, recent incident involving the loss of data on the F-35 program that could have prompted Bogdan’s remark.

    During Wednesday’s hearing, Lieutenant General Charles Davis, the top uniformed Air Force acquisition official, cited China’s recent unveiling of two new fighter planes over a period of 22 months as cause for concern.

    Pressed for details by committee members, he said China may have used data from U.S. computer networks to design and build the planes, although he said the Chinese planes’ capabilities would probably not measure up to those of the F-35 and the F-22 fighter, also built by Lockheed.

    Pratt & Whitney, a unit of United Technologies Corp that builds the engine for the new single-engine, single-seat fighter, also refuted Bogdan’s remark.

    “We do not discuss details of our cyber security initiatives, but we have a well established strategy in place to protect our intellectual property and company private data, as well as our customer’s information, against cyber threats,” said spokesman Matthew Bates.

    It’ll be interesting to see if the Chinese knockoff version of the F-35 contains the hacking vulnerability too. And you have to love Pratt & Whitney’s assertions about the “well established strategy” for protecting their clients’ intellectual property. Yep, it’s quite a strategy!

    Posted by Pterrafractyl | May 2, 2013, 9:11 am
  9. And now we have a SkyNet gap. This is going to end well:

    Computerworld
    Fear of thinking war machines may push U.S. to exascale
    Congress readies a bill, but funding estimates are below other nations’

    By Patrick Thibodeau

    Computerworld — WASHINGTON — Unlike China and Europe, the U.S. has yet to adopt and fund an exascale development program, and concerns about what that means for U.S. security are growing darker and more dire.

    China’s retaking of the global supercomputing crown was the starting point for discussion at an IBM-sponsored congressional forum this week on cognitive computing.

    Cognitive computing systems have the capability of taking vast amounts of data and making what will be, for all intents, thoughtful decisions.

    Efforts to draw attention to exascale in the U.S. House are being led by Rep. Randy Hultgren (R-Ill.), who talked about China’s new 33.89-petaflop system, Tianhe-2.

    “It’s important not to lose sight that the reality was that it was built by China’s National University of Defense Technology,” said Hultgren, who is finalizing a bill “that will push our nation toward exascale.”

    Hultgren is introducing legislation, the American Supercomputing Leadership Act, to require the U.S. Department of Energy to develop a coordinated exascale research program. The bill doesn’t call for a specific spending level, but one source said an annual appropriation of about $200 million, if not more, will be sought.

    That amount of money is well short of what’s needed to build an exascale system, a computer capable of 1,000 petaflops. Each petaflop represents one thousand trillion floating point operations per second.

    Earl Joseph, an HPC analyst at IDC, said that “$200 million is better than nothing, but compared to China and Europe it’s at least 10 times too low.”

    Joseph said that it’s his guess that the world will see an exascale system by 2015 or 2016, “installed outside the U.S. It will take a lot of power and it will be large, but it will provide a major capability.”

    Lawmakers, at a recent hearing, were told by HPC researchers that the U.S. needs to spend at least $400 million annually to achieve exascale capabilities in a reasonable time, possibly by the end of this decade.

    If the U.S. falls behind in HPC, the consequences will be “in a word, devastating,” Selmer Bringsjord, chair of the Department of Cognitive Science at Rensselaer Polytechnic Institute, said at the forum. “If we were to lose our capacity to build preeminently smart machines, that would be a very dark situation, because machines can serve as weapons.

    “When it comes to intelligent software, the U.S. is preeminent and we simply cannot lose that, because the repercussions in the future, defense-wise, would be very bad,” said Bringsjord.
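    A quick arithmetic check of the scales in the article: one petaflop is a thousand trillion (10^15) floating point operations per second, an exascale machine runs at 10^18, i.e. 1,000 petaflops, so Tianhe-2 at 33.89 petaflops sits at roughly 3.4 percent of exascale:

```python
PETAFLOP = 10**15  # one thousand trillion FLOP/s, as the article says
EXAFLOP = 10**18   # the exascale target

print(EXAFLOP // PETAFLOP)          # 1000 petaflops per exaflop
tianhe2 = 33.89 * PETAFLOP          # Tianhe-2's peak, per the article
print(round(tianhe2 / EXAFLOP, 4))  # 0.0339, about 3.4% of exascale
```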


    Posted by Pterrafractyl | June 21, 2013, 8:56 am
  10. Note to humanity: Skynet Jr. just started school and the teachers are already raising some red flags. While it did well at some tasks, it also seemed to have difficulty with the “why” questions. So why not start a global thermonuclear war to wipe out the scourge of humanity, right? Right:

    PC Magazine
    Artificial Intelligence Machines Operating at 4-Year-Old Level
    By Stephanie Mlot
    July 17, 2013 10:08am EST

    It appears that the threat of a worldwide takeover by artificial intelligence machines is not yet a reality, unless you consider 4-year-olds an impending threat.

    Researchers at the University of Illinois at Chicago (UIC) recently IQ tested one of the “best available” AI systems. As it turns out, it’s about as smart as a 4-year-old kid.

    ConceptNet 4, an MIT-developed AI system, was put through Pre-K boot camp, running the verbal portions of the Wechsler Preschool and Primary Scale of Intelligence Test — a standard IQ assessment for young children. According to the UIC, the super-smart computer scored uneven marks across different portions of the test — a red flag for most kids.

    “If a child had scores that varied this much, it might be a symptom that something is wrong,” Robert Sloan, lead author of the study and the head of computer science at UIC, said in a statement.

    While ConceptNet 4 tested well in vocabulary and the ability to recognize similarities, it did dramatically worse than average on comprehension — the “why” questions, Sloan said.

    It’s those sorts of commonsense situations that prove the most difficult in building an AI machine, according to the professor.

    What seems so simple to most humans has long eluded artificial intelligence engineers, because it requires a large compilation of facts, as well as what Sloan calls “implicit facts” — things so obvious that we don’t realize we know them.

    “All of us know a huge number of things,” Sloan said. “As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled.”

    So, a computer may know the temperature at which water freezes, but not know that ice is cold.

    Based on the UIC team’s research, those nightmares about HAL 9000 staring you down with his bright red eye, defying your strict commands, will not be happening anytime soon.

    “We’re still very far from programs with commonsense — AI that can answer comprehension questions with the skill of a child of 8,” Sloan said. He and his colleagues hope their study will shed some light on the “hard spots” in artificial intelligence research.


    Posted by Pterrafractyl | July 17, 2013, 9:40 am
