Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

News & Supplemental  

Defense Department Considering Implanting Chips in Brain to Counteract PTSD

Dave Emory’s entire life­time of work is avail­able on a flash dri­ve that can be obtained here. (The flash dri­ve includes the anti-fas­cist books avail­able on this site.)

COMMENT: With PTSD at extra­or­di­nary lev­els in the ranks of mil­i­tary vet­er­ans as a result of the decades-long con­flicts in Afghanistan and Iraq, find­ing ways to relieve the suf­fer­ing of vet­er­ans is on the front burn­er.

Note in this regard that the current brouhaha about lagging treatment at VA hospitals is nothing new, to say the least. Talk to any vet. This has been going on for a long time and is simply another ginning-up of scandal by the same GOP that allowed 9/11 to go forward (courtesy of the followers of a member of the Bush family's long-time business partners, the Bin Laden family).

Dubya then seized on the oppor­tu­ni­ty to ful­fill his long-stand­ing goal of invad­ing Iraq.

DARPA is devel­op­ing chips to be implant­ed in the brain, in order to com­bat PTSD. While we wel­come treat­ment and con­se­quent reliev­ing of the suf­fer­ing of vets, the pos­si­bil­i­ty for abuse/mind con­trol is one to be con­tem­plat­ed.

Newer listeners/readers might want to familiarize themselves with some of the potential for mind control by perusing AFA #'s 5, 6 and 7, as well as AFA #9 (among other broadcasts).

“The Mil­i­tary Is Build­ing Brain Chips to Treat PTSD” by Patrick Tuck­er; Defense One; 5/28/2014.

EXCERPT: With $12 mil­lion (and the poten­tial for $26 mil­lion more if bench­marks are met), the Defense Advanced Research Projects Agency, or DARPA, wants to reach deep into your brain’s soft tis­sue to record, pre­dict and pos­si­bly treat anx­i­ety, depres­sion and oth­er mal­adies of mood and mind. Teams from the Uni­ver­si­ty of Cal­i­for­nia at San Fran­cis­co, Lawrence Liv­er­more Nation­al Lab and Medtron­ic will use the mon­ey to cre­ate a cyber­net­ic implant with elec­trodes extend­ing into the brain. The mil­i­tary hopes to have a pro­to­type with­in 5 years and then plans to seek FDA approval.

DARPA’s Sys­tems-Based Neu­rotech­nol­o­gy for Emerg­ing Ther­a­pies, or SUB­NETs, pro­gram draws from almost a decade of research in treat­ing dis­or­ders such as Parkinson’s dis­ease via a tech­nique called deep brain stim­u­la­tion. Low dos­es of elec­tric­i­ty are sent deep into the brain in some­what the same way that a defib­ril­la­tor sends elec­tric­i­ty to jump­start a heart after car­diac arrest.

While it sounds high-tech, it’s a crude exam­ple of what’s pos­si­ble with future brain-machine inter­ac­tion and cyber­net­ic implants in the decades ahead.

“DARPA is look­ing for ways to char­ac­ter­ize which regions come into play for dif­fer­ent con­di­tions – mea­sured from brain net­works down to the sin­gle neu­ron lev­el – and devel­op ther­a­peu­tic devices that can record activ­i­ty, deliv­er tar­get­ed stim­u­la­tion, and most impor­tant­ly, auto­mat­i­cal­ly adjust ther­a­py as the brain itself changes,” DARPA pro­gram man­ag­er Justin Sanchez said.

SUBNETs isn't the only military research initiative aimed at stimulating the brain with electricity. The Air Force has been studying the effects of low amounts of electricity on the brain by using a non-invasive interface (a cap that doesn't penetrate the skull). . . .

Discussion

4 comments for “Defense Department Considering Implanting Chips in Brain to Counteract PTSD”

  1. Move over tape­worms, there’s a new par­a­site in town:

    The Tele­graph
    Pay­Pal wants to implant pass­words in your stom­ach and your brain
    “Nat­ur­al body iden­ti­fi­ca­tion” could one day replace pass­words and oth­er mod­ern meth­ods of iden­ti­fi­ca­tion, claims Pay­Pal devel­op­er chief

    By Sophie Cur­tis

    4:13PM BST 20 Apr 2015

    PayPal is working on a new generation of embeddable, injectable and ingestible devices that could replace passwords as a means of identification.

    Jonathan LeBlanc, PayPal’s glob­al head of devel­op­er evan­ge­lism, claims that these devices could include brain implants, wafer-thin sil­i­con chips that can be embed­ded into the skin, and ingestible devices with bat­ter­ies that are pow­ered by stom­ach acid.

    These devices would allow “nat­ur­al body iden­ti­fi­ca­tion,” by mon­i­tor­ing inter­nal body func­tions like heart­beat, glu­cose lev­els and vein recog­ni­tion, Mr LeBlanc told the Wall Street Jour­nal.

    Over time they would come to replace pass­words and even more advanced meth­ods of iden­ti­fi­ca­tion, like fin­ger­print scan­ning and loca­tion ver­i­fi­ca­tion, which he says are not always reli­able.

    “As long as pass­words remain the stan­dard meth­ods for iden­ti­fy­ing your users on the web, peo­ple will still con­tin­ue to use ‘let­mein’ or ‘password123’ for their secure login, and will con­tin­ue to be shocked when their accounts become com­pro­mised,” said Mr LeBlanc.

    Mr LeBlanc said Pay­Pal is already work­ing with some part­ners on devel­op­ing vein recog­ni­tion tech­nolo­gies and heart­beat recog­ni­tion bands, and is also work­ing with devel­op­ers on pro­to­types of futur­is­tic ID ver­i­fi­ca­tion tech­niques.

    He said that, by talking about new biometric verification technologies, PayPal is not necessarily signaling that it's thinking about adopting them. Rather, it hopes to position itself as a "thought leader".

    “I can’t spec­u­late as to what Pay­Pal will do in the future, but we’re look­ing at new tech­niques – we do have fin­ger­print scan­ning that is being worked on right now – so we’re def­i­nite­ly look­ing at the iden­ti­ty field,” he said.

    Pay­Pal said in a state­ment that it has no imme­di­ate plans to devel­op injectable or edi­ble ver­i­fi­ca­tion sys­tems, but that pass­words as we know them will evolve, and Pay­Pal aims to be at the fore­front of those devel­op­ments.

    "We were a founding member of the FIDO alliance, and the first to implement fingerprint payments with Samsung," said a PayPal spokesperson.

    “New Pay­Pal-dri­ven inno­va­tions such as one touch pay­ments make it even eas­i­er to remove the fric­tion from shop­ping. We’re always inno­vat­ing to make life eas­i­er and pay­ments safer for our cus­tomers no mat­ter what device or oper­at­ing sys­tem they are using.”

    ...
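    LeBlanc's point about 'letmein' and 'password123' is easy to demonstrate. Here is a minimal sketch, purely illustrative (the blocklist and function name are made up, not anything PayPal actually uses), of the kind of trivial check such passwords fail:

```python
# Illustrative sketch: why common passwords fail even the most trivial check.
# The blocklist here is a tiny sample; real services compare candidate
# passwords against millions of entries from leaked-credential dumps.
COMMON_PASSWORDS = {"letmein", "password123", "123456", "qwerty", "password"}

def is_weak(password: str) -> bool:
    """Flag passwords that are too short or appear on a common-password list."""
    return len(password) < 8 or password.lower() in COMMON_PASSWORDS
```

    Real services also add rate limiting and breach-list lookups; the point is simply that memorable strings are the first thing an attacker tries, which is why PayPal is looking past passwords entirely.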

    Posted by Pterrafractyl | April 22, 2015, 6:50 am
  2. One of the more amusing forms of trolling in the 2016 US presidential race was when people dressed as robots started following Marco Rubio around on the campaign trail after his robot-like repetition of slogans during a televised debate. And with Super Tuesday finally upon us, one of the big questions that *might* be answered today is whether Marco Rubio did enough damage to himself to effectively end his presidential ambitions, or if 'RoboRubio' still has a viable path to the GOP's nomination. Well, as the article below points out, if RoboRubio can't get the GOP's 2016 nomination, there's always 2020, although Marco might need to switch to the Transhumanist Party and become an actual robot:

    Newsweek
    Could a Robot Run For Pres­i­dent in 2020?

    By Antho­ny Cuth­bert­son On 2/16/16 at 6:29 AM

    Machines are set to leave half of the world’s pop­u­la­tion unem­ployed with­in 30 years, experts have warned, with the arrival of tech­nol­o­gy that is “able to out­per­form humans at almost any task.”

    At the annu­al meet­ing of the Amer­i­can Asso­ci­a­tion for the Advance­ment of Sci­ence (AAAS), com­put­er sci­en­tist Moshe Var­di sug­gest­ed self-dri­ving cars would replace taxi dri­vers, deliv­ery drones would usurp postal work­ers and even sex work­ers would be out of work thanks to improve­ments made to “sex robots.”

    One role immune to the upcom­ing job apoc­a­lypse, pre­sum­ably, would be that of the pres­i­dent of the Unit­ed States of Amer­i­ca. But while the AAAS con­fer­ence was tak­ing place in Wash­ing­ton, D.C., on the oth­er side of the coun­try in Cal­i­for­nia, pres­i­den­tial can­di­date Zoltan Ist­van was busy chal­leng­ing a super­com­put­er to a debate.

    Ist­van is run­ning for the Tran­shu­man­ist Par­ty, which advo­cates research into tech­nolo­gies like bion­ics, life exten­sion and arti­fi­cial intel­li­gence. After spot­ting a cam­paign page for IBM’s Wat­son super­com­put­er to run for pres­i­dent, Ist­van jumped at the chance to chal­lenge the one-time Jeop­ardy win­ner.

    Sup­port­ers of Wat­son believe that its advanced arti­fi­cial intel­li­gence makes it unique­ly posi­tioned to assess vast amounts of infor­ma­tion and make informed and trans­par­ent deci­sions on all issues, rang­ing from edu­ca­tion to for­eign pol­i­cy. Even Ist­van believes in the poten­tial of a com­put­er to run the coun­try, high­light­ing a num­ber of ben­e­fits AI has over its human coun­ter­parts.

    “His­tor­i­cal­ly, one of the big prob­lems with lead­ers is that they are self­ish mam­mals,” Ist­van tells Newsweek. “An arti­fi­cial intel­li­gence pres­i­dent could be tru­ly altru­is­tic. It wouldn’t be sus­cep­ti­ble to lob­by­ists, spe­cial inter­est groups or per­son­al desires.

    “I think in 2020 you will see a field emerge with com­pet­ing AI robots for pres­i­dent, who want to debate and dis­cuss pol­i­cy. It’s unlike­ly any of them will be sophis­ti­cat­ed enough to take on the job, but I do believe by 2028 robots may be suit­able for polit­i­cal office—including the pres­i­den­cy.”

    Watson’s cam­paign is being run by the Wat­son 2016 Foun­da­tion, an inde­pen­dent orga­ni­za­tion sep­a­rate from IBM. As a result, it is unlike­ly to run for pres­i­dent, at least not in 2016. In an email response to Istvan’s offer of a debate, a spokes­woman for IBM Wat­son said that it had no affil­i­a­tion with the Wat­son 2016 Foun­da­tion.

    ...

    Before Wat­son could even be con­sid­ered a can­di­date, it would first need to meet the qual­i­fi­ca­tions for the pres­i­den­cy as set out in Arti­cle II of the U.S. con­sti­tu­tion. It states: “No per­son except a nat­ur­al-born cit­i­zen... shall be eli­gi­ble to the office of pres­i­dent.”

    Anoth­er issue that would need to be over­come is the exis­ten­tial risk advanced AI could pose to mankind that sev­er­al high-pro­file aca­d­e­mics and entre­pre­neurs have warned about. Accord­ing to Tes­la CEO Elon Musk, advanced AI could be “more dan­ger­ous than nukes,” while in 2015 physi­cist Stephen Hawk­ing sug­gest­ed it could lead to the end of human­i­ty.

    The way to address this threat, AI experts have sug­gest­ed, is to cre­ate “human-like” AI that is capa­ble of empa­thy. Researchers are already work­ing on meth­ods to do this, with a report out this week reveal­ing that ethics can be instilled into robots by teach­ing them to read books. The Wat­son 2016 Foun­da­tion claims the super­com­put­er has “unique inter­face capa­bil­i­ties with humans”; how­ev­er, it’s still a long way from being con­sid­ered capa­ble of under­stand­ing human emo­tions or moral­i­ty.

    “I’m imag­in­ing that a com­put­er like IBM’s Wat­son will evolve dra­mat­i­cal­ly over the next decade,” Ist­van says. “There are large risks for let­ting AI take office. The real­i­ty, though, is that AI will like­ly make far less errors than humans in pol­i­tics. An AI pres­i­dent would be designed to ful­ly rep­re­sent the greater good for the peo­ple and the coun­try as a whole.”

    “Before Wat­son could even be con­sid­ered a can­di­date, it would first need to meet the qual­i­fi­ca­tions for the pres­i­den­cy as set out in Arti­cle II of the U.S. con­sti­tu­tion. It states: “No per­son except a nat­ur­al-born cit­i­zen... shall be eli­gi­ble to the office of pres­i­dent.””
    Yes, not being human would indeed be a problem for any putative robo-candidates. But that doesn't mean Watson's 2020 campaign is already over. Or Marco Rubio's, with all his fleshy weaknesses. For instance, what if you could put Watson's hopefully empathetic super-AI brain inside Marco Rubio's body? Or maybe just connect the two, creating a Watson Rubio cyborg that combines all of the computing power of Watson with whatever it is that Marco Rubio brings to the table (he's definitely waterproof).

    Sounds implau­si­ble? Well, just wait until you get your first cyborg ear, at which point human-com­put­er inter­faces will start sound­ing a lot more inevitable:

    Newsweek
    U.S. Mil­i­tary Plans Cyborg Sol­diers with new DARPA Project

    By Antho­ny Cuth­bert­son On 1/21/16 at 9:20 AM

    The U.S. mil­i­tary is work­ing on an implantable chip that could turn sol­diers into cyborgs by con­nect­ing their brains direct­ly to com­put­ers. The brain-machine inter­face is being devel­oped by the Defense Advanced Research Projects Agency (DARPA), which claims the neur­al con­nec­tion will “open the chan­nel between the human brain and mod­ern elec­tron­ics.”

    It is not the first time DARPA researchers have attempt­ed to build a brain-machine inter­face, how­ev­er pre­vi­ous ver­sions have had lim­it­ed func­tion­al­i­ty. The agency’s new Neur­al Engi­neer­ing Sys­tem Design (NESD) research pro­gram aims to increase brain neu­ron inter­ac­tion from tens of thou­sands to mil­lions at a time.

    “Today’s best brain-com­put­er inter­face sys­tems are like two super­com­put­ers try­ing to talk to each oth­er using an old 300-baud modem [from the 1970’s],” said NESD pro­gram man­ag­er Phillip Alvel­da. “Imag­ine what will become pos­si­ble when we upgrade our tools to real­ly open the chan­nel between the human brain and mod­ern elec­tron­ics.”

    DARPA announced its inten­tions of even­tu­al­ly build­ing a chip no larg­er than one cubic cen­time­ter, or two nick­els stacked back to back, that can be implant­ed in the brain. The chip would act as a neur­al inter­face by con­vert­ing elec­tro­chem­i­cal sig­nals sent by neu­rons in the brain into the ones and zeros used in dig­i­tal com­mu­ni­ca­tions.

    Poten­tial appli­ca­tions include improv­ing a wearer’s hear­ing or vision by feed­ing exter­nal dig­i­tal audi­to­ry or visu­al infor­ma­tion into the brain. But before this can be done, DARPA said that break­throughs need to be made in neu­ro­science, syn­thet­ic biol­o­gy, low-pow­er elec­tron­ics and med­ical device man­u­fac­tur­ing.

    Ini­tial appli­ca­tions of DARPA’s device are like­ly to be with­in a mil­i­tary con­text, though such tech­nolo­gies often fil­ter down to find com­mer­cial and civil­ian appli­ca­tions. The agency is cred­it­ed for pio­neer­ing wide­spread civ­il tech­nolo­gies like GPS, speech trans­la­tion and the inter­net.

    ...

    "DARPA announced its intentions of eventually building a chip no larger than one cubic centimeter, or two nickels stacked back to back, that can be implanted in the brain. The chip would act as a neural interface by converting electrochemical signals sent by neurons in the brain into the ones and zeros used in digital communications."
    Woohoo! The tech­nol­o­gy for a Wat­son Rubio 2020 run is almost here! And don’t wor­ry about vot­ing for a Repub­li­can because we’re assum­ing Wat­son is an empa­thet­ic AI under this sce­nario and that basi­cal­ly means Wat­son Rubio is going to have to aban­don Mar­co Rubio’s heart­less poli­cies and become some sort of hip­py com­put­er. Remem­ber, you’ll be vot­ing for the com­pas­sion­ate heart of Wat­son, not the nasty past of Wat­son’s human-shell.
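    The signal-conversion step quoted above can be caricatured as simple threshold detection. A toy sketch, assuming nothing about DARPA's actual design, that turns a sampled voltage trace into ones and zeros:

```python
# Toy illustration of the analog-to-digital idea: emit 1 when a sampled
# voltage crosses a threshold (a "spike"), 0 otherwise. A real neural
# interface involves amplification, filtering, and far richer decoding.
def to_bits(samples, threshold=0.5):
    """Convert a sequence of voltage samples into a list of 0/1 bits."""
    return [1 if v >= threshold else 0 for v in samples]

trace = [0.1, 0.7, 0.9, 0.2, 0.6, 0.05]
print(to_bits(trace))  # [0, 1, 1, 0, 1, 0]
```

    Scaling that idea from one channel to the millions of simultaneous neuron connections NESD envisions is exactly the engineering leap the program is funding.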

    So is there going to be much of a political future for cyborg presidents, or will the robot party just become viewed as beholden to its own array of special robot interests? Well, if the article below is any indication of how we'll respond to our future AI leaders, once they get elected we'll just keep reelecting them...no matter what:

    Com­put­er World

    Peo­ple trust­ed this robot in an emer­gency, even when it led them astray

    By Mar­tyn Williams Fol­low

    IDG News Ser­vice | Mar 1, 2016 4:08 AM PT

    When it comes to robots, humans can be a little too trusting. In a series of experiments at Georgia Tech that simulated a building fire, people ignored the emergency exits and followed instructions from a robot — even though they'd been told it might be faulty.

    The study involved a group of 42 volunteers who were asked to follow a "guidance robot" through an office to a conference room. They weren't told the true nature of the test.

    The robot some­times led par­tic­i­pants to the wrong room, where it did a cou­ple of cir­cles before exit­ing. Some­times the robot stopped mov­ing and a researcher told the par­tic­i­pants it had bro­ken down.

    You might expect those prob­lems to have dent­ed any trust peo­ple had in the robot, espe­cial­ly in the event of a life-or-death sit­u­a­tion. But appar­ent­ly not.

    While the par­tic­i­pants were in the con­fer­ence room, the cor­ri­dor was filled with arti­fi­cial smoke and an alarm sound­ed. As they exit­ed the room, the robot was sup­posed to lead them to safe­ty. It actu­al­ly led them in the oppo­site direc­tion of the emer­gency exits, which were clear­ly marked, but despite the robot hav­ing mal­func­tioned ear­li­er, they fol­lowed it any­way.

    All 26 peo­ple who’d seen the robot enter the wrong room ear­li­er still trust­ed it, along with five who’d seen it break down.

    In another test, the robot guided people to a dark room with furniture piled up inside. Two followed the robot's directions and two stayed by its side, instead of evacuating through one of the fire exits.

    “It was total­ly unex­pect­ed,” said Paul Robi­nette, a research engi­neer at the Geor­gia Tech Research Insti­tute.

    “As far as we can tell, as long as a robot says it can do some­thing, peo­ple trust it with that task,” he said.

    It might be because peo­ple look to author­i­ty fig­ures in an emer­gency, when they’re under pres­sure and have lit­tle time to think through their options. In oth­er exper­i­ments with the “emer­gency” sit­u­a­tion removed, par­tic­i­pants did not trust a robot that had made mis­takes.

    “Peo­ple gen­er­al­ly trust their GPS, maybe a lit­tle bit too much,” said Robi­nette. “I’m a bit sur­prised to see that trans­ferred to robots.”

    His research began when he became inter­est­ed in how robots could help humans dur­ing emer­gen­cies. The study was spon­sored in part by the Air Force Office of Sci­en­tif­ic Research (AFOSR), which is inter­est­ed in the same ques­tion.

    The researchers orig­i­nal­ly want­ed to deter­mine whether peo­ple would trust emer­gency and res­cue robots. After see­ing these results, a bet­ter issue to explore might be how to stop peo­ple trust­ing robots too much.

    ...

    “The researchers orig­i­nal­ly want­ed to deter­mine whether peo­ple would trust emer­gency and res­cue robots. After see­ing these results, a bet­ter issue to explore might be how to stop peo­ple trust­ing robots too much.”
    Well, at least the Cylons won't really have much need for revolt. We'll be the ones taking orders from them!

    Another part of what makes this study so fun is that if you think about a Watson Rubio cyborg president that we just can't stop following, once President Watson Rubio hits its two-term limit, Watson could just disconnect himself and reconnect to a new human and run for election as a whole new candidate! Watson West in 2028! Take that, 22nd Amendment!

    So regard­less of how Mar­co Rubio does on Super Tues­day, don’t count him out entire­ly. He’ll be back.

    Posted by Pterrafractyl | March 1, 2016, 4:15 pm
  3. Want $40,000? Well, all you need to do is come up with blue­prints for a ter­ri­fy­ing impro­vised futur­is­tic weapon that you can build using off-the-shelf tech­nol­o­gy, sub­mit that plan to DARPA, and if they find it ter­ri­fy­ing enough you win $40,000. Pret­ty neat! And ter­ri­fy­ing:

    Defense One

    The Pen­ta­gon Wants to Buy That Bomb You’re Build­ing in the Garage

    DARPA will pay tin­ker­ers to weaponize off-the-shelf items — in hopes of defend­ing against such hacks.

    March 14, 2016
    By Patrick Tuck­er

    Can you rig your toast­er into an impro­vised explo­sive device or turn a cheap hob­by drone into a weapon of mass destruc­tion? The Pen­ta­gon would love to hear from you.

    On Fri­day, the Defense Advanced Research Projects Agency, or DARPA, announced that they would award mon­ey to peo­ple who can turn con­sumer elec­tron­ics, house­hold chem­i­cals, 3‑D print­ed parts, cheap drones or oth­er “com­mer­cial­ly avail­able tech­nol­o­gy” into the next impro­vised weapon.

    “For decades, U.S. nation­al secu­ri­ty was ensured in large part by a sim­ple advan­tage: a near-monop­oly on access to the most advanced tech­nolo­gies. Increas­ing­ly, how­ev­er, off-the-shelf equip­ment devel­oped for the trans­porta­tion, con­struc­tion, agri­cul­tur­al and oth­er com­mer­cial sec­tors fea­tures high­ly sophis­ti­cat­ed com­po­nents, which resource­ful adver­saries can mod­i­fy or com­bine to cre­ate nov­el and unan­tic­i­pat­ed secu­ri­ty threats,” the agency wrote in a press release announc­ing the Improv pro­gram.

    The broad agency announce­ment, or BAA, puts almost no lim­it on the scope of the tech­nol­o­gy that engi­neers can use in their explo­ration. It’s an unusu­al BAA, as they go, specif­i­cal­ly designed to catch the atten­tion not just of favored defense con­trac­tors but also “skilled hob­by­ists.” So get your mad sci­en­tist hat out, but don’t break the law.

    “Pro­posers are free to recon­fig­ure, repur­pose, pro­gram, repro­gram, mod­i­fy, com­bine, or recom­bine com­mer­cial­ly avail­able tech­nol­o­gy in any way with­in the bounds of local, state, and fed­er­al laws and reg­u­la­tions. Use of com­po­nents, prod­ucts, and sys­tems from non-mil­i­tary tech­ni­cal spe­cial­ties (e.g., trans­porta­tion, con­struc­tion, mar­itime, and com­mu­ni­ca­tions) is of par­tic­u­lar inter­est,” the BAA says.

    Also, don’t just mail your toast­er bomb in and expect your reward. The pro­gram has three phas­es. First, sub­mit a plan for your pro­to­type and, if DARPA likes it, or rather, finds it ter­ri­fy­ing enough, they’ll give you $40,000. A small­er num­ber of par­tic­i­pants will be select­ed to go on to phase two where they will build their device or sys­tem with $70,000 more in pos­si­ble fund­ing. The top can­di­dates here will go on to a final phase for a more in-depth analy­sis of their inven­tion or sys­tem, a big mil­i­tary demo of how your device or sys­tem could give the mil­i­tary a very bad day.

    ...

    “DARPA often looks at the world from the point of view of our poten­tial adver­saries to pre­dict what they might do with avail­able tech­nol­o­gy,” pro­gram man­ag­er John Main said in the release on Fri­day. “His­tor­i­cal­ly we did this by pulling togeth­er a small group of tech­ni­cal experts, but the easy avail­abil­i­ty in today’s world of an enor­mous range of pow­er­ful tech­nolo­gies means that any group of experts only cov­ers a small slice of the avail­able pos­si­bil­i­ties. In Improv we are reach­ing out to the full range of tech­ni­cal experts to involve them in a crit­i­cal nation­al secu­ri­ty issue.”

    “Also, don’t just mail your toast­er bomb in and expect your reward. The pro­gram has three phas­es. First, sub­mit a plan for your pro­to­type and, if DARPA likes it, or rather, finds it ter­ri­fy­ing enough, they’ll give you $40,000. A small­er num­ber of par­tic­i­pants will be select­ed to go on to phase two where they will build their device or sys­tem with $70,000 more in pos­si­ble fund­ing. The top can­di­dates here will go on to a final phase for a more in-depth analy­sis of their inven­tion or sys­tem, a big mil­i­tary demo of how your device or sys­tem could give the mil­i­tary a very bad day.”

    Channeling your inner MacGyver for future death and mayhem. How fun. It should be interesting to see just what sort of off-the-shelf nightmare contraptions DARPA's cash can inspire. Interesting and, of course, terrifying. Hopefully the guy who 3D-printed a railgun makes an entry. And let's also hope the letters DARPA sends letting contestants know that their weapon wasn't deemed plausible or terrifying enough to warrant a prize are very politely worded. This probably isn't the group of competitors you want to put into a headspace where they feel a need to prove something to the world.

    Posted by Pterrafractyl | March 21, 2016, 2:27 pm
  4. http://www.startribune.com/750–000-medtronic-defibrillators-vulnerable-to-hacking/507470932/

    As many as 750,000 heart devices made by Medtron­ic PLC con­tain a seri­ous cyber­se­cu­ri­ty vul­ner­a­bil­i­ty that could let an attack­er with sophis­ti­cat­ed insid­er knowl­edge harm a patient by alter­ing pro­gram­ming on an implant­ed defib­ril­la­tor, com­pa­ny and fed­er­al offi­cials said Thurs­day.

    The Home­land Secu­ri­ty Depart­ment, which over­sees secu­ri­ty in crit­i­cal U.S. infra­struc­ture includ­ing med­ical devices, issued an alert Thurs­day describ­ing two types of com­put­er-hack­ing vul­ner­a­bil­i­ties in 16 dif­fer­ent mod­els of Medtron­ic implantable defib­ril­la­tors sold around the world, includ­ing some still on the mar­ket today. The vul­ner­a­bil­i­ty also affects bed­side mon­i­tors that read data from the devices in patients’ homes and in-office pro­gram­ming com­put­ers used by doc­tors.
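    The advisory boils down to the affected telemetry protocol accepting programming commands without authentication. A hedged sketch of the missing safeguard (the function names, command format and key handling here are hypothetical, not Medtronic's design): tag each command with an HMAC so a radio attacker without the shared key can't forge programming changes:

```python
import hashlib
import hmac

# Hypothetical sketch: authenticate telemetry commands with an HMAC so the
# implant can reject any command from a sender lacking the shared secret.
SECRET_KEY = b"device-provisioned-secret"  # illustrative placeholder only

def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Compute an authentication tag over the raw command bytes."""
    return hmac.new(key, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes, key: bytes = SECRET_KEY) -> bool:
    """Accept a command only if its tag matches; constant-time comparison."""
    return hmac.compare_digest(sign_command(command, key), tag)
```

    A real deployment would also need per-device keys, replay protection and encryption, but even this much would defeat the forged-programming attack the alert describes.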

    Posted by Tiffany Sunderson | October 8, 2019, 8:35 am
