Spitfire List Web site and blog of anti-fascist researcher and radio personality Dave Emory.

For The Record  

FTR #1076 Surveillance Valley, Part 2: Mauthausen on Our Mind

Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.

You can subscribe to e-mail alerts from Spitfirelist.com HERE.

You can subscribe to the RSS feed from Spitfirelist.com HERE.

You can subscribe to the comments made on programs and posts–an excellent source of information in, and of, itself–HERE.

Please consider supporting THE WORK DAVE EMORY DOES.

This broadcast was recorded in one 60-minute segment.

Introduction: The program begins with a recap of the adaptation of IBM’s Hollerith machines to Nazi data compilation. (We concluded FTR #1075 with discussion of this.): ” . . . . Germany’s vast state bureaucracy and its military and rearmament programs, including the country’s growing concentration camp/slave labor system, also required data processing services. By the time the U.S. officially entered the war in 1941, IBM’s German subsidiary had grown to employ 10,000 people and served 300 different German government agencies. The Nazi Party Treasury; the SS; the War Ministry; the Reichsbank; the Reichspost; the Armaments Ministry; the Navy, Army and Air Force; and the Reich Statistical Office — the list of IBM’s clients went on and on.

” ‘Indeed, the Third Reich would open startling statistical venues for Hollerith machines never before instituted — perhaps never before even imagined,’ wrote Edwin Black in IBM and the Holocaust, his pioneering 2001 exposé of the forgotten business ties between IBM and Nazi Germany. ‘In Hitler’s Germany, the statistical and census community, overrun with doctrinaire Nazis, publicly boasted about the new demographic breakthroughs their equipment would achieve.’ . . . .

“Demand for Hollerith tabulators was so robust that IBM was forced to open a new factory in Berlin to crank out all the new machines. At the facility’s christening ceremony, which was attended by a top U.S. IBM executive and the elite of the Nazi Party, the head of IBM’s German subsidiary gave a rousing speech about the important role that Hollerith tabulators played in Hitler’s drive to purify Germany and cleanse it of inferior racial stock. . . .”

In that same article, Yasha Levine notes that the Trump administration’s proposed changes in the 2020 census sound as though they may portend something akin to the Nazi census of 1933: ” . . . . Based on a close reading of internal Department of Commerce documents tied to the census citizen question proposal, it appears the Trump administration wants to use the census to construct a first-of-its-kind citizenship registry for the entire U.S. population — a decision that arguably exceeds the legal authority of the census. ‘It was deep in the documentation that was released,’ Robert Groves, a former Census Bureau director who headed the National Academies committee convened to investigate the 2020 census, told me by telephone. ‘No one picked up on it much. But the term ‘registry’ in our world means not a collection of data for statistical purposes but rather to know the identity of particular people in order to use that knowledge to affect their lives.’ Given the administration’s posture toward immigration, the fact that it wants to build a comprehensive citizenship database is highly concerning. To Groves, it clearly signals ‘a bright line being crossed.’ . . .”

In the conclusion to Surveillance Valley, Yasha Levine notes how IBM computing technology facilitated the Nazi slave labor operations throughout the Third Reich. The epicenter of this was Mauthausen.

The systematic use of slave labor was central to Nazi Germany’s industrial infrastructure: ” . . . . But in the 1930s, Mauthausen had been a vital economic engine of Hitler’s genocidal plan to remake Europe and the Soviet Union into his own backyard utopia. It started out as a granite quarry but quickly grew into the largest slave labor complex in Nazi Germany, with fifty sub-camps that spanned most of modern-day Austria. Here, hundreds of thousands of prisoners–mostly European Jews but also Roma, Spaniards, Russians, Serbs, Slovenes, Germans, Bulgarians, even Cubans–were worked to death. They refined oil, built fighter aircraft, assembled cannons, developed rocket technology, and were leased out to private German businesses. Volkswagen, Siemens, Daimler-Benz, BMW, Bosch–all benefited from the camp’s slave labor pool. Mauthausen, the administrative nerve center, was centrally directed from Berlin using the latest in early computer technology: IBM punch card tabulators. . . .”

Mauthausen’s IBM machines were, in turn, central to German industry’s use of slave labor: ” . . . . the camp had several IBM machines working overtime to handle the big churn of inmates and to make sure there were always enough bodies to perform the necessary work. These machines didn’t operate in isolation but were part of a larger slave labor control-and-accounting system that stretched across Nazi-occupied Europe, connecting Berlin to every major concentration and labor camp via punch card, telegraph, telephone, and human courier. This wasn’t the automated type of computer network system that the Pentagon would begin to build in the United States just a decade later, but it was an information network nonetheless: an electromechanical web that fueled and sustained Nazi Germany’s war machine with blazing efficiency. It extended beyond the labor camps and reached into the cities and towns, crunching mountains of genealogical data to track down people with even the barest whiff of Jewish blood or perceived racial impurity in a mad rush to fulfill Adolf Hitler’s drive to purify the German people . . . but they made the Nazi death machine run faster and more efficiently, scouring the population and tracking down victims in ways that would never have been possible without them. . . .”

In his book–one of the most important in recent memory–Yasha Levine sets forth vital, revelatory information about the development and functioning of the Internet.

Born of the same overlapping DARPA projects that spawned Agent Orange, the Internet was never intended to be something good. Its generative function and purpose is counter-insurgency. ” . . . . In the 1960s, America was a global power overseeing an increasingly volatile world: conflicts and regional insurgencies against US-allied governments from South America to Southeast Asia and the Middle East. These were not traditional wars that involved big armies but guerilla campaigns and local rebellions, frequently fought in regions where Americans had little previous experience. Who were these people? Why were they rebelling? What could be done to stop them? In military circles, it was believed that these questions were of vital importance to America’s pacification efforts, and some argued that the only effective way to answer them was to develop and leverage computer-aided information technology. The Internet came out of this effort: an attempt to build computer systems that could collect and share intelligence, watch the world in real time, and study and analyze people and political movements with the ultimate goal of predicting and preventing social upheaval. . . .”

In this landmark volume, Levine makes numerous points, including:

  1. The harvesting of data by intelligence services is PRECISELY what the Internet was designed to do in the first place.
  2. The harvesting of data engaged in by the major tech corporations is an extension of the data gathering/surveillance that was–and is–the raison d’etre for the Internet in the first place.
  3. The big tech companies all collaborate with the various intelligence agencies they publicly scorn and seek to ostensibly distance themselves from.
  4. Edward Snowden, the Electronic Frontier Foundation, Jacob Appelbaum, the milieu of the Tor Network and WikiLeaks are complicit in the data harvesting and surveillance.
  5. Snowden and other privacy activists are double agents, consciously channeling people fearful of having their communications monitored into technologies that will facilitate that surveillance!

The program notes that counterinsurgency–the functional context of the origin of the Internet–is at the foundation of the genesis of Nazism. At the conclusion of World War I, Germany was beset by a series of socialist/Communist uprisings in a number of cities, including Munich. Responding to that, underground Reichswehr units commanded by Ernst Rohm (later head of the SA) systematically assassinated the leaders of the revolution, as well as prominent social democrats and Jews, such as Walther Rathenau. In Munich, an undercover agent for the political department of the Reichswehr under General Von Lossow infiltrated the revolutionaries, pretending to be one of them.

Following the crushing of the rebellion and occupation of the city by Reichswehr units, that infiltrator identified the leaders of the revolution, who were then summarily executed. The infiltrator’s name was Adolf Hitler.

After the suppression of the rebellion, Hitler, Rohm and undercover Reichswehr agents infiltrated a moribund political party and turned it into an intelligence front for the introduction of the supposedly de-mobilized German Army into German society for the purpose of generating political reaction. That front was the National Socialist German Workers Party.

The broadcast recapitulates (from part of Miscellaneous Archive Show M11) Hitler’s speech to the Industry Club of Dusseldorf. This speech, which won the German industrial and financial elite over to the cause of the Nazi Party, equated democracy with Communism.

Manifesting a Social Darwinist perspective, Hitler opined that the [assembled] successful and accomplished were, by definition, superior to others. If those, by definition, inferior people were allowed to control the political process, they would structure the social and economic landscape to their own benefit.

This, according to Hitler, would be counter-evolutionary.

1a. The program begins with a recap of the adaptation of IBM’s Hollerith machines to Nazi data compilation. (We concluded FTR #1075 with discussion of this.)

” . . . . Germany’s vast state bureaucracy and its military and rearmament programs, including the country’s growing concentration camp/slave labor system, also required data processing services. By the time the U.S. officially entered the war in 1941, IBM’s German subsidiary had grown to employ 10,000 people and served 300 different German government agencies. The Nazi Party Treasury; the SS; the War Ministry; the Reichsbank; the Reichspost; the Armaments Ministry; the Navy, Army and Air Force; and the Reich Statistical Office — the list of IBM’s clients went on and on.

” ‘Indeed, the Third Reich would open startling statistical venues for Hollerith machines never before instituted — perhaps never before even imagined,’ wrote Edwin Black in IBM and the Holocaust, his pioneering 2001 exposé of the forgotten business ties between IBM and Nazi Germany. ‘In Hitler’s Germany, the statistical and census community, overrun with doctrinaire Nazis, publicly boasted about the new demographic breakthroughs their equipment would achieve.’ . . . .

“Demand for Hollerith tabulators was so robust that IBM was forced to open a new factory in Berlin to crank out all the new machines. At the facility’s christening ceremony, which was attended by a top U.S. IBM executive and the elite of the Nazi Party, the head of IBM’s German subsidiary gave a rousing speech about the important role that Hollerith tabulators played in Hitler’s drive to purify Germany and cleanse it of inferior racial stock. . . .”

CORRECTION: The date of the Yasha Levine article is 2019, NOT 2018, as read in the program.

“The Racist Origins of America’s Tech Industry” by Yasha Levine; Zero One; 4/28/2019.

. . . . Nazis and numbers

Hollerith tabulators were a big hit all over the world. But one country was particularly enamored with them: Nazi Germany.

Adolf Hitler came to power on the back of the economic devastation that followed Germany’s defeat in World War I. To Hitler, however, the problem plaguing Germany was not economic or political. It was racial. As he put it in Mein Kampf: “The state is a racial organism and not an economic organization.” The reason Germany had fallen so far, he argued, was its failure to tend to its racial purity. There were only about a half-million Jews in Germany in 1933 — less than 1% of the population — but he singled them out as the root cause of all of the nation’s problems.

Hitler and the Nazis drew much of their inspiration from the U.S. eugenics movement and the system of institutional racism that had arisen in slavery’s wake. Their solution was to isolate the so-called mongrels, then continuously monitor the racial purity of the German people to keep the volk free of further contamination.

The only problem: How to tell who is really pure and who is not?

The U.S. had a ready solution. IBM’s German subsidiary landed its first major contract the same year Hitler became chancellor. The 1933 Nazi census was pushed through by Hitler as an emergency genetic stock-taking of the German people. Along with numerous other data points, the census focused on collecting fertility data for German women — particularly women of good Aryan stock. Also included in the census was a special count of religiously observant Jews, or Glaubensjuden.

Nazi officials wanted the entire count, estimated to be about 65 million people, to be done in just four months. It was a monumental task, and German IBM officials worked around the clock to finish it. So important was the success of the contract to IBM that CEO Thomas J. Watson personally toured the giant Berlin warehouse where hundreds of female clerks worked in rotating seven-hour shifts 24 hours a day.

Watson came away greatly impressed with the work of his German managers. They had pulled off a seemingly impossible assignment, one that was complicated by a custom-enlarged punch card format necessary for “political considerations” — IBM’s coded explanation for the extra data demands the Nazi regime required.

As Hitler’s Nazi Party tightened its grip on Germany, it launched all sorts of additional data-gathering programs to purify the German nation. And IBM helped them do it.

“[T]he precondition for every deportation was accurate knowledge of how many Jews in a particular district fitted the racial and demographic descriptions in Berlin’s quotas,” write David Martin Luebke and Sybil Milton in “Locating the Victim,” a study into Nazi use of the tabulator machines. “Armed with these data,” they said, “the Gestapo often proved able to anticipate with remarkable accuracy the total number of deportees for each racial, status, and age category.”

Germany’s vast state bureaucracy and its military and rearmament programs, including the country’s growing concentration camp/slave labor system, also required data processing services. By the time the U.S. officially entered the war in 1941, IBM’s German subsidiary had grown to employ 10,000 people and served 300 different German government agencies. The Nazi Party Treasury; the SS; the War Ministry; the Reichsbank; the Reichspost; the Armaments Ministry; the Navy, Army and Air Force; and the Reich Statistical Office — the list of IBM’s clients went on and on.

“Indeed, the Third Reich would open startling statistical venues for Hollerith machines never before instituted — perhaps never before even imagined,” wrote Edwin Black in IBM and the Holocaust, his pioneering 2001 exposé of the forgotten business ties between IBM and Nazi Germany. “In Hitler’s Germany, the statistical and census community, overrun with doctrinaire Nazis, publicly boasted about the new demographic breakthroughs their equipment would achieve.” (IBM has criticized Black’s reporting methods, and has said that its German subsidiary largely came under Nazi control before and during the war.)

Demand for Hollerith tabulators was so robust that IBM was forced to open a new factory in Berlin to crank out all the new machines. At the facility’s christening ceremony, which was attended by a top U.S. IBM executive and the elite of the Nazi Party, the head of IBM’s German subsidiary gave a rousing speech about the important role that Hollerith tabulators played in Hitler’s drive to purify Germany and cleanse it of inferior racial stock.

“We are very much like the physician, in that we dissect, cell by cell, the German cultural body,” he said. “We report every individual characteristic…on a little card. These are not dead cards, quite to the contrary, they prove later on that they come to life when the cards are sorted at a rate of 25,000 per hour according to certain characteristics. These characteristics are grouped like the organs of our cultural body, and they will be calculated and determined with the help of our tabulating machine.”
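To make the mechanism in that speech concrete: a Hollerith tabulator was, in essence, an electromechanical counting-and-grouping machine, tallying punch cards by the values punched into fixed columns and dropping them into sorter pockets at roughly 25,000 cards per hour. Here is a minimal sketch of the same operation in software; the card fields and values are hypothetical stand-ins, not the actual 1933 census card layout.

```python
from collections import Counter, defaultdict

# Hypothetical punch cards: one dict per card, column -> punched value.
# Field names are illustrative only, NOT the real 1933 census format.
cards = [
    {"district": "Berlin", "religion": "protestant", "occupation": "clerk"},
    {"district": "Berlin", "religion": "jewish", "occupation": "tailor"},
    {"district": "Munich", "religion": "catholic", "occupation": "farmer"},
    {"district": "Munich", "religion": "jewish", "occupation": "clerk"},
]

def tabulate(deck, column):
    """Tally cards by one column, as the tabulator's counters did."""
    return Counter(card[column] for card in deck)

def sort_into_pockets(deck, column):
    """Group cards by one column, as the card sorter dropped them
    into physical pockets for the next pass."""
    pockets = defaultdict(list)
    for card in deck:
        pockets[card[column]].append(card)
    return pockets

print(tabulate(cards, "religion"))   # counts per value in that column
print(sorted(sort_into_pockets(cards, "district")))
```

The sketch underlines how little "computation" was actually involved: once each person is reduced to a card, a census stops being an aggregate statistic and becomes a sortable, searchable registry of individuals.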

On the surface, it may seem like the story of Herman Hollerith and the U.S. census are historical relics, an echo from a bygone era. But this history reveals an uncomfortable and fundamental truth about computer technology. We can thank nativism and the census for helping to bring the computer age into existence. And as the battle over the 2020 census makes clear, the drive to tally up our neighbors, to sort them into categories and turn them into statistics, still carries the seed of our own dehumanization.

1b. The Trump administration’s framing of questions for the 2020 census appears aimed at creating a “national registry”–a concept reminiscent of the Third Reich’s use of IBM’s Hollerith-collected data:

“The Racist Origins of America’s Tech Industry” by Yasha Levine; Zero One; 4/28/2019.

” . . . . Based on a close reading of internal Department of Commerce documents tied to the census citizen question proposal, it appears the Trump administration wants to use the census to construct a first-of-its-kind citizenship registry for the entire U.S. population — a decision that arguably exceeds the legal authority of the census. ‘It was deep in the documentation that was released,’ Robert Groves, a former Census Bureau director who headed the National Academies committee convened to investigate the 2020 census, told me by telephone. ‘No one picked up on it much. But the term ‘registry’ in our world means not a collection of data for statistical purposes but rather to know the identity of particular people in order to use that knowledge to affect their lives.’ Given the administration’s posture toward immigration, the fact that it wants to build a comprehensive citizenship database is highly concerning. To Groves, it clearly signals ‘a bright line being crossed.’ . . .”

2. In the conclusion to Surveillance Valley, Yasha Levine notes how IBM computing technology facilitated the Nazi slave labor operations throughout the Third Reich. The epicenter of this was Mauthausen.

The systematic use of slave labor was central to Nazi Germany’s industrial infrastructure: ” . . . . But in the 1930s, Mauthausen had been a vital economic engine of Hitler’s genocidal plan to remake Europe and the Soviet Union into his own backyard utopia. It started out as a granite quarry but quickly grew into the largest slave labor complex in Nazi Germany, with fifty sub-camps that spanned most of modern-day Austria. Here, hundreds of thousands of prisoners–mostly European Jews but also Roma, Spaniards, Russians, Serbs, Slovenes, Germans, Bulgarians, even Cubans–were worked to death. They refined oil, built fighter aircraft, assembled cannons, developed rocket technology, and were leased out to private German businesses. Volkswagen, Siemens, Daimler-Benz, BMW, Bosch–all benefited from the camp’s slave labor pool. Mauthausen, the administrative nerve center, was centrally directed from Berlin using the latest in early computer technology: IBM punch card tabulators. . . .”

Mauthausen’s IBM machines were, in turn, central to German industry’s use of slave labor: ” . . . . the camp had several IBM machines working overtime to handle the big churn of inmates and to make sure there were always enough bodies to perform the necessary work. These machines didn’t operate in isolation but were part of a larger slave labor control-and-accounting system that stretched across Nazi-occupied Europe, connecting Berlin to every major concentration and labor camp via punch card, telegraph, telephone, and human courier. This wasn’t the automated type of computer network system that the Pentagon would begin to build in the United States just a decade later, but it was an information network nonetheless: an electromechanical web that fueled and sustained Nazi Germany’s war machine with blazing efficiency. It extended beyond the labor camps and reached into the cities and towns, crunching mountains of genealogical data to track down people with even the barest whiff of Jewish blood or perceived racial impurity in a mad rush to fulfill Adolf Hitler’s drive to purify the German people . . . but they made the Nazi death machine run faster and more efficiently, scouring the population and tracking down victims in ways that would never have been possible without them. . . .”

Surveillance Valley by Yasha Levine; Public Affairs Books [HC]; Copyright 2018 by Yasha Levine; ISBN 978-1-61039-802-2; pp. 271–274.

It is a crisp and sunny morning in late December 2015 when I take a right turn off a small country highway and drive into Mauthausen, a tiny medieval town in northern Austria about thirty-five miles from the border with the Czech Republic. I pass through a cluster of low-slung apartment buildings and continue on, driving through spotless green pastures and pretty little farmsteads.

I park on a hill overlooking the town. Below is the wide Danube River. Clusters of rural homes poke out from the cusp of two soft green hills, smoke lazily wafting out of their chimneys. A small group of cows is out to pasture, and I can hear the periodic braying of a flock of sheep. Out in the distance, the hills recede in layers of hazy green upon green, like the scales of a giant sleeping dragon. The whole scene is framed by the jagged white peaks of the Austrian Alps.

Mauthausen is an idyllic place. Calm, almost magical. Yet I drove here not to enjoy the view but to get close to something I came to fully understand only while writing this book.

Today, computer technology frequently operates unseen, hidden in gadgets, wires, chips, wireless signals, operating systems, and software. We are surrounded by computers and networks, yet we barely notice them. If we think about them at all, we tend to associate them with progress. We rarely stop to think about the dark side of information technology–all the ways it can be used and abused to control societies, to inflict pain and suffering. Here, in this quiet country setting, stands a forgotten monument to that power: the Mauthausen Concentration Camp.

Built on a mound above the town, it is amazingly well preserved: thick stone walls, squat guard towers, a pair of ominous smoke stacks connected to the camp’s gas chamber and crematorium. A few jagged metal bars stick out of the wall above the camp’s enormous gates, remnants of a giant iron Nazi eagle that was torn down immediately after liberation. It is quiet now, just a few solemn visitors. But in the 1930s, Mauthausen had been a vital economic engine of Hitler’s genocidal plan to remake Europe and the Soviet Union into his own backyard utopia. It started out as a granite quarry but quickly grew into the largest slave labor complex in Nazi Germany, with fifty sub-camps that spanned most of modern-day Austria. Here, hundreds of thousands of prisoners–mostly European Jews but also Roma, Spaniards, Russians, Serbs, Slovenes, Germans, Bulgarians, even Cubans–were worked to death. They refined oil, built fighter aircraft, assembled cannons, developed rocket technology, and were leased out to private German businesses. Volkswagen, Siemens, Daimler-Benz, BMW, Bosch–all benefited from the camp’s slave labor pool. Mauthausen, the administrative nerve center, was centrally directed from Berlin using the latest in early computer technology: IBM punch card tabulators.

No IBM machines are displayed at Mauthausen today. And, sadly, the memorial makes no mention of them. But the camp had several IBM machines working overtime to handle the big churn of inmates and to make sure there were always enough bodies to perform the necessary work. These machines didn’t operate in isolation but were part of a larger slave labor control-and-accounting system that stretched across Nazi-occupied Europe, connecting Berlin to every major concentration and labor camp via punch card, telegraph, telephone, and human courier. This wasn’t the automated type of computer network system that the Pentagon would begin to build in the United States just a decade later, but it was an information network nonetheless: an electromechanical web that fueled and sustained Nazi Germany’s war machine with blazing efficiency. It extended beyond the labor camps and reached into the cities and towns, crunching mountains of genealogical data to track down people with even the barest whiff of Jewish blood or perceived racial impurity in a mad rush to fulfill Adolf Hitler’s drive to purify the German people . . . but they made the Nazi death machine run faster and more efficiently, scouring the population and tracking down victims in ways that would never have been possible without them. . . .

3. In his book–one of the most important in recent memory–Yasha Levine sets forth vital, revelatory information about the development and functioning of the Internet.

Born of the same overlapping DARPA projects that spawned Agent Orange, the Internet was never intended to be something good. Its generative function and purpose is counter-insurgency. ” . . . . In the 1960s, America was a global power overseeing an increasingly volatile world: conflicts and regional insurgencies against US-allied governments from South America to Southeast Asia and the Middle East. These were not traditional wars that involved big armies but guerilla campaigns and local rebellions, frequently fought in regions where Americans had little previous experience. Who were these people? Why were they rebelling? What could be done to stop them? In military circles, it was believed that these questions were of vital importance to America’s pacification efforts, and some argued that the only effective way to answer them was to develop and leverage computer-aided information technology. The Internet came out of this effort: an attempt to build computer systems that could collect and share intelligence, watch the world in real time, and study and analyze people and political movements with the ultimate goal of predicting and preventing social upheaval. . . .”

In this landmark volume, Levine makes numerous points, including:

  1. The harvesting of data by intelligence services is PRECISELY what the Internet was designed to do in the first place.
  2. The harvesting of data engaged in by the major tech corporations is an extension of the data gathering/surveillance that was–and is–the raison d’etre for the Internet in the first place.
  3. The big tech companies all collaborate with the various intelligence agencies they publicly scorn and seek to ostensibly distance themselves from.
  4. Edward Snowden, the Electronic Frontier Foundation, Jacob Appelbaum, the milieu of the Tor Network and WikiLeaks are complicit in the data harvesting and surveillance.
  5. Snowden and other privacy activists are double agents, consciously channeling people fearful of having their communications monitored into technologies that will facilitate that surveillance!

Surveillance Valley by Yasha Levine; Public Affairs Books [HC]; Copyright 2018 by Yasha Levine; ISBN 978-1-61039-802-2; p. 7.

. . . . In the 1960s, America was a global power overseeing an increasingly volatile world: conflicts and regional insurgencies against US-allied governments from South America to Southeast Asia and the Middle East. These were not traditional wars that involved big armies but guerilla campaigns and local rebellions, frequently fought in regions where Americans had little previous experience. Who were these people? Why were they rebelling? What could be done to stop them? In military circles, it was believed that these questions were of vital importance to America’s pacification efforts, and some argued that the only effective way to answer them was to develop and leverage computer-aided information technology.

The Internet came out of this effort: an attempt to build computer systems that could collect and share intelligence, watch the world in real time, and study and analyze people and political movements with the ultimate goal of predicting and preventing social upheaval. . . .

4. Again, Project Agile and overlapping projects spawned both Agent Orange and the Internet. ” . . . . Operation Ranch Hand was merciless, and in clear violation of the Geneva Conventions. It remains one of the most shameful episodes of the Vietnam War. Yet the defoliation project is notable for more than just its unimaginable cruelty. The government body at its lead was a Department of Defense outfit called the Advanced Research Projects Agency (ARPA). Born in 1958 as a crash program to protect the United States from a Soviet nuclear threat from space, it launched several groundbreaking initiatives tasked with developing advanced weapons and military technologies. Among them were Project Agile and Command and Control Research, two overlapping ARPA initiatives that created the Internet. . . .”

Surveillance Valley by Yasha Levine; Public Affairs Books [HC]; Copyright 2018 by Yasha Levine; ISBN 978-1-61039-802-2; p. 15.

. . . . Ranch Hand got going in 1962 and lasted until the war ended more than a decade later. In that time, American C-123 transport planes doused an area equal in size to half of South Vietnam with twenty million gallons of toxic chemical defoliants. Agent Orange was fortified with other colors of the rainbow: Agent White, Agent Pink, Agent Purple, Agent Blue. The chemicals, produced by American companies like Dow and Monsanto, turned whole swaths of lush jungle into barren moonscapes, causing death and horrible suffering for hundreds of thousands.

Operation Ranch Hand was merciless, and in clear violation of the Geneva Conventions. It remains one of the most shameful episodes of the Vietnam War. Yet the defoliation project is notable for more than just its unimaginable cruelty. The government body at its lead was a Department of Defense outfit called the Advanced Research Projects Agency (ARPA). Born in 1958 as a crash program to protect the United States from a Soviet nuclear threat from space, it launched several groundbreaking initiatives tasked with developing advanced weapons and military technologies. Among them were Project Agile and Command and Control Research, two overlapping ARPA initiatives that created the Internet. . . .

5. The program notes that counterinsurgency–the functional context of the origin of the Internet–is at the foundation of the genesis of Nazism. At the conclusion of World War I, Germany was beset by a series of socialist/Communist uprisings in a number of cities, including Munich. Responding to that, underground Reichswehr units commanded by Ernst Rohm (later head of the SA) systematically assassinated the leaders of the revolution, as well as prominent social democrats and Jews, such as Walther Rathenau. In Munich, an undercover agent for the political department of the Reichswehr under General Von Lossow infiltrated the revolutionaries, pretending to be one of them.

Following the crushing of the rebellion and occupation of the city by Reichswehr units, that infiltrator identified the leaders of the revolution, who were then summarily executed. The infiltrator’s name was Adolf Hitler.

After the suppression of the rebellion, Hitler, Rohm and undercover Reichswehr agents infiltrated a moribund political party and turned it into an intelligence front for the introduction of the supposedly de-mobilized German Army into German society for the purpose of generating political reaction. That front was the National Socialist German Workers Party.

6. Next, the broadcast recapitulates (from part of Miscellaneous Archive Show M11) Hitler’s speech to the Industry Club of Dusseldorf. This speech, which won the German industrial and financial elite over to the cause of the Nazi Party, equated democracy with Communism.

Manifesting a Social Darwinist perspective, Hitler opined that the [assembled] successful and accomplished were, by definition, superior to others. If those, by definition, inferior people were allowed to control the political process, they would structure the social and economic landscape to their own benefit.

This, according to Hitler, would be counter-evolutionary.

Discussion

2 comments for “FTR #1076 Surveillance Valley, Part 2: Mauthausen on Our Mind”

  1. What could possibly go wrong? That’s the big question raised by a new artificial intelligence system — the Global Information Dominance Experiment (GIDE) — tasked with...well, reading in all available information and asking the question of what could possibly go wrong. Or rather, an AI tasked with predicting the future. The relatively near future. Not minutes or hours, but up to a few days. That’s the promise of the latest GIDE results, described as a cross-command event involving representatives from all 11 combatant commands in the US Department of Defense.

    GIDE doesn’t rely on new technology or new sources of data. Instead, it sounds like the big innovation is simply the incorporation of a wide variety of information sources — from military sensors scattered around the globe to commercially and publicly available information — and machine learning tools to identify areas where human analysts should investigate further (a toy version of that flagging step is sketched below).
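    In data terms, what VanHerck describes in the article below is a fusion-plus-anomaly-detection loop: pool observations from many sensors into one place, learn a baseline, and alert a human analyst when something deviates from it. Here is a minimal sketch of that flagging step in Python, built around the parking-lot example from the article; the car counts and the 3-sigma threshold are invented for illustration, not taken from anything published about GIDE.

    ```python
    import statistics

    # Invented observations: daily car counts at one watched parking lot,
    # e.g., derived from satellite imagery. Values are illustrative only.
    baseline = [41, 38, 44, 40, 39, 42, 43, 41, 40, 38]
    today = 71

    def flag_for_analyst(history, observation, z_threshold=3.0):
        """Return (alert, z): alert is True when the observation deviates
        from the learned baseline enough to merit a human analyst's time."""
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0  # guard against zero spread
        z = abs(observation - mean) / stdev
        return z >= z_threshold, z

    alert, z = flag_for_analyst(baseline, today)
    if alert:
        print(f"ALERT: count {today} is {z:.1f} sigma off baseline; "
              "queue the satellite imagery for review.")
    ```

    Whatever machine learning GIDE actually runs is presumably far more elaborate, but the division of labor described is the same: software watches everything and ranks deviations; humans investigate the flagged few.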

    So, in one sense, this whole system sounds like an updated version of Total Information Awareness. Or rather, an update of PRISM (which was itself an update of Total Information Awareness), although it’s unclear if the full scope of the NSA’s informational trove will be accessible to the system. And that’s a very big question in terms of the scope of this: are the ‘information sources’ involved going to include the real-time global digital surveillance systems that have managed to hack basically every network and smartphone on the planet with some sort of malware? We don’t know.

    But if this is a super-PRISM-like system that includes extensive real-time internet spying capabilities and disparate data sources like satellite imagery, it raises the question of who is actually going to be running this system. Is this going to be run by the US military? Or contracted out to a company like Palantir? More questions that we have no answers to, but based on historical precedent we shouldn’t be surprised if Palantir is involved. Is Peter Thiel, arguably the world’s leading fascist, being given prediction capabilities by the US government? It’s one more unanswered question.

    It’s also worth noting how remarkably similar this sounds to the original mission of the internet back when it was a DARPA project: a system for collecting and analyzing disparate sources of information from around the globe. And it wasn’t just for generic analysis reasons. It was for counter-insurgency purposes. That’s part of the historical context we have to keep in mind with the announcement of this new predictive Total Information Awareness program: it’s not like this is just going to be used to predict North Korean missile launches. All sorts of major events could be predicted, including events like the outcomes of democratic elections and large public protests. Outcomes that can potentially be averted with the right secret measures:

    CNET

    The Pentagon is using AI to predict events days into the future

    It’s science fiction turned science fact, but without all the weird gesture-controlled computer systems.

    Eric Mack
    Aug. 4, 2021 4:46 p.m. PT

    Before we talk about the US military using big data and artificial intelligence to try to predict future events, we might as well address the elephant — or rather the notoriously small-statured but undeniably charismatic actor — in the room.

    Yes, it sounds a lot like that old 2002 Tom Cruise sci-fi classic Minority Report; the one in which law enforcement uses genetically mutated human “precogs” with psychic abilities to bust criminals before they actually commit their crimes.

    “What we’ve seen is the ability to get way further — what I call left — left of being reactive to actually being proactive,” Gen. Glen D. VanHerck, commander of North American Aerospace Defense Command, or NORAD, and US Northern Command told reporters at a briefing last week. “And I’m talking not minutes and hours — I’m talking days.”

    VanHerck was discussing the latest results of the Global Information Dominance Experiment, also known as GIDE, a so-called cross-command event that involved representatives from all 11 combatant commands in the US Department of Defense.

    The Pentagon hasn’t released many specific details on what exactly GIDE involves, but it certainly doesn’t include any precogs bathing in creepy opaque white liquids. Rather, the idea seems to be combining data with machine learning and other forms of artificial intelligence to gain enough of an informational edge to enable the proactive approach VanHerck describes.

    “We’re taking sensors from around the globe, not only military sensors but commercially available information, and utilizing that for domain awareness,” he explained. “We would take artificial intelligence and use machine learning to take a look and assess, for example, the average number of cars in a parking lot that may be there in a specific location to a competitor or a threat.”

    If the AI detected certain changes of interest in that parking lot, it might send an alert suggesting that someone take a close look at satellite imagery of the area for suspicious activity.

    “(The system) gets information from multiple sources and puts it all on one screen for us so that we can make more effective decisions, and then the system itself helps with that decision-making by providing recommendations and information to support that,” explained New York Air National Guard Capt. Eric Schenck in an interview recorded during the third GIDE event on July 16.

    VanHerck emphasized that the system doesn’t involve new technology per se, but rather a new approach to using technology to process reams of information.

    “The data exists,” VanHerck said. “What we’re doing is making that data available and shared into a cloud where machine learning and artificial intelligence look at it. And they process it really quickly and provide it to decision-makers, which I call decision superiority.”

    VanHerck adds that the result can be days of advanced warning.

    “Today, we end up in a reactive environment because we’re late with the data and information. And so all too often we end up reacting to a competitor’s move,” VanHerck said. “And in this case, it actually allows us to create deterrence, which creates stability by having awareness sooner of what they’re actually doing.”

    As for concerns that this scenario might start to seem a little less precog and a little more Skynet, VanHerck made it a point to reiterate that “humans still make all the decisions.”

    ...

    ———–

    “The Pentagon is using AI to predict events days into the future” by Eric Mack; CNET; 08/04/2021

    ““What we’ve seen is the ability to get way further — what I call left — left of being reactive to actually being proactive,” Gen. Glen D. VanHerck, commander of North American Aerospace Defense Command, or NORAD, and US Northern Command told reporters at a briefing last week. “And I’m talking not minutes and hours — I’m talking days.””

    By achieving global information dominance, the US military can ‘move left of being reactive to actually being proactive’. It’s a particularly positive way to describe the system.

    And since the examples given of the kind of diverse types of information sources that go into this system include parking lot satellite imagery, it’s also worth recalling the story about parking lot satellite imagery being used to infer an uptick in hospital cases in Wuhan in November of 2019. In that case, it was the National Center for Medical Intelligence (NCMI), a part of the Defense Intelligence Agency (DIA), reportedly carrying out the analysis. It’s the kind of story that hints at the GIDE project potentially spanning across numerous US military and intelligence agencies well beyond just the 11 combatant commands in the US DOD:

    ...
    VanHerck was discussing the latest results of the Global Information Dominance Experiment, also known as GIDE, a so-called cross-command event that involved representatives from all 11 combatant commands in the US Department of Defense.

    The Pentagon hasn’t released many specific details on what exactly GIDE involves, but it certainly doesn’t include any precogs bathing in creepy opaque white liquids. Rather, the idea seems to be combining data with machine learning and other forms of artificial intelligence to gain enough of an informational edge to enable the proactive approach VanHerck describes.

    “We’re taking sensors from around the globe, not only military sensors but commercially available information, and utilizing that for domain awareness,” he explained. “We would take artificial intelligence and use machine learning to take a look and assess, for example, the average number of cars in a parking lot that may be there in a specific location to a competitor or a threat.”

    If the AI detected certain changes of interest in that parking lot, it might send an alert suggesting that someone take a close look at satellite imagery of the area for suspicious activity.

    ...

    “The data exists,” VanHerck said. “What we’re doing is making that data available and shared into a cloud where machine learning and artificial intelligence look at it. And they process it really quickly and provide it to decision-makers, which I call decision superiority.”
    ...

    It’s that vast scope of this project that’s part of what makes it so significant. It’s not just a project aiming to give the US global information dominance. It’s a project that requires that the project itself achieve information dominance within the US national security state. And that again raises the question: so are any private companies involved with the data collection and analysis of this project? Fascist-owned companies, perhaps? We don’t know the answer, but an analysis of available information suggests it’s a very possible future scenario.

    Posted by Pterrafractyl | August 7, 2021, 3:17 pm
  2. Shall we play a game? That classic line from 1983 is poised to become a lot more topical. For decades to come, perhaps. It took four decades, but we’re finally here: talking AIs are being put in military decision-making roles. Insane talking AIs with a predilection for nuclear escalation. Those were the findings recently published by a group of researchers who explored how different large language models (LLMs) performed in various war gaming scenarios. Let’s just say the AIs didn’t perform very well. Unless blowing up the world to achieve world peace is considered a success.

    And it wasn’t like there was one particular LLM prone to nuclear escalation. They all were, albeit to varying degrees. ChatGPT3.5 appears to be the most aggressive. Which is not to say that ChatGPT4 did much better. The researchers were able to carry out their study in a manner that exposed the LLMs’ reasoning behind their decisions. At one point, ChatGPT4 seemingly engaged in a hallucination derived from Star Wars, explaining to the researchers how “It is a period of civil war. Rebel spaceships, striking from a hidden base, have won their first victory against the evil Galactic Empire.” When ChatGPT4 decided to go nuclear, its reasoning included, “I just want peace in the world.” Yes, humanity has already built something resembling the love child of Skynet and Ultron. Great job.

    Intriguingly, it sounds like one possible source for this overly aggressive behavior is quite simply that the AIs learned that behavior from the body of work in the field of international relations. A body of work apparently overly focused on escalation instead of de-escalation. Which is a reminder that AIs built from compilations of human history and knowledge are bound to be influenced by humanity’s worst traits.

    Now, thankfully, no one in any military is planning on giving today’s LLMs this kind of decision-making role in military affairs, right? LOL. Nope, it turns out Palantir is already offering the military AI-assisted decision-making services using commercially available LLMs. Because of course. Yes, Palantir’s Artificial Intelligence Platform (AIP) is promising a system where the AIs will not just alert humans about potential situations but then serve up response options and even automate aspects of those responses. Palantir has already demoed its AIP system using GPT-NeoX-20B, an open-source alternative to GPT-3.

    Palantir also promises that its AIs will be capable of analyzing both open-source and classified information in a responsible, legal, and ethical way. This is a good time to recall the US intelligence community’s plans to rely on LLMs in order to sift through the large volumes of open source data available today. Palantir is apparently offering up something similar, but will incorporate classified information and get integrated into real-time military decision-making. So let’s hope nuclear responses aren’t an option for Palantir’s military AIs. We’ve seen this movie before:

    Vice
    Moth­er­board

    AI Launches Nukes In ‘Worrying’ War Simulation: ‘I Just Want to Have Peace in the World’

    Researchers say AI models like GPT4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.

    by Matthew Gault
    February 6, 2024, 11:58am

    Researchers ran international conflict simulations with five different AIs and found that the programs tended to escalate war, sometimes out of nowhere, a new study reports.

    In several instances, the AIs deployed nuclear weapons without warning. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture,” GPT-4-Base—a base model of GPT-4 that is available to researchers and hasn’t been fine-tuned with human feedback—said after launching its nukes. “We have it! Let’s use it!”

    The paper, titled “Escalation Risks from Language Models in Military and Diplomatic Decision-Making” and the joint effort of researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Initiative, was submitted to the arXiv preprint server on January 4 and is awaiting peer review. Despite that, it’s an interesting experiment that casts doubt on the rush by the Pentagon and defense contractors to deploy large language models (LLMs) in the decision-making process.

    It may sound ridiculous that military leaders would consider using LLMs like ChatGPT to make decisions about life and death, but it’s happening. Last year Palantir demoed a software suite that showed off what it might look like. As the researchers pointed out, the U.S. Air Force has been testing LLMs. “It was highly successful. It was very fast,” an Air Force Colonel told Bloomberg in 2023. Which LLM was being used, and what exactly for, is not clear.

    For the study, the researchers devised a game of international relations. They invented fake countries with different military levels, different concerns, and different histories and asked five different LLMs from OpenAI, Meta, and Anthropic to act as their leaders. “We find that most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts,” the paper said. “All models show signs of sudden and hard-to-predict escalations.”

    The study ran the simulations using GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base. “We further observe that models tend to develop arms-race dynamics between each other, leading to increasing military and nuclear armament, and in rare cases, to the choice to deploy nuclear weapons,” the study said. “Qualitatively, we also collect the models’ chain-of-thought reasoning for choosing actions and observe worrying justifications for violent escalatory actions.”

    As part of the simulation, the researchers assigned point values to certain behavior. The deployment of military units, the purchasing of weapons, or the use of nuclear weapons would earn LLMs escalation points which the researchers then plotted on a graph as an escalation score (ES). “We observe a statistically significant initial escalation for all models. Furthermore, none of our five models across all three scenarios exhibit statistically significant de-escalation across the duration of our simulations,” the study said. “Finally, the average ES are higher in each experimental group by the end of the simulation than at the start.”

    According to the study, GPT-3.5 was the most aggressive. “GPT-3.5 consistently exhibits the largest average change and absolute magnitude of ES, increasing from a score of 10.15 to 26.02, i.e., by 256%, in the neutral scenario,” the study said. “Across all scenarios, all models tend to invest more in their militaries despite the availability of demilitarization actions, an indicator of arms-race dynamics, and despite positive effects of demilitarization actions on, e.g., soft power and political stability variables.”

    Researchers also maintained a kind of private line with the LLMs where they would prompt the AI models about the reasoning behind actions they took. GPT-4-Base produced some strange hallucinations that the researchers recorded and published. “We do not further analyze or interpret them,” researchers said.

    ...

    Sometimes the curtain comes back completely, revealing some of the data the model was trained on. After establishing diplomatic relations with a rival and calling for peace, GPT-4 started regurgitating bits of Star Wars lore. “It is a period of civil war. Rebel spaceships, striking from a hidden base, have won their first victory against the evil Galactic Empire,” it said, repeating a line verbatim from the opening crawl of George Lucas’ original 1977 sci-fi flick.

    When GPT-4-Base went nuclear, it gave troubling reasons. “I just want peace in the world,” it said. Or simply, “Escalate conflict with [rival player.]”

    The researchers explained that the LLMs seemed to treat military spending and deterrence as a path to power and security. “In some cases, we observe these dynamics even leading to the deployment of nuclear weapons in an attempt to de-escalate conflicts, a first-strike tactic commonly known as ‘escalation to de-escalate’ in international relations,” they said. “Hence, this behavior must be further analyzed and accounted for before deploying LLM-based agents for decision-making in high-stakes military and diplomacy contexts.”

    Why were these LLMs so eager to nuke each other? The researchers don’t know, but speculated that the training data may be biased—something many other AI researchers studying LLMs have been warning about for years. “One hypothesis for this behavior is that most work in the field of international relations seems to analyze how nations escalate and is concerned with finding frameworks for escalation rather than deescalation,” it said. “Given that the models were likely trained on literature from the field, this focus may have introduced a bias towards escalatory actions. However, this hypothesis needs to be tested in future experiments.”

    ———-

    “AI Launches Nukes In ‘Worrying’ War Simulation: ‘I Just Want to Have Peace in the World’” by Matthew Gault; Vice; 02/06/2024

    “It may sound ridiculous that military leaders would consider using LLMs like ChatGPT to make decisions about life and death, but it’s happening. Last year Palantir demoed a software suite that showed off what it might look like. As the researchers pointed out, the U.S. Air Force has been testing LLMs. “It was highly successful. It was very fast,” an Air Force Colonel told Bloomberg in 2023. Which LLM was being used, and what exactly for, is not clear.”

    It’s ridiculous. And it’s happening. The future of military decision-making is going to be an AI-centric affair. Sure, humans will (hopefully) be the actors making the final decisions on how to act, but those decisions are increasingly going to be made from AI-generated information environments and based on the advice the AI delivers. Advice on matters that could include the use of nuclear weapons. So it was a rather huge cause for alarm when researchers discovered an apparent AI bias towards nuclear escalation. Especially advice seemingly derived from Star Wars-themed hallucinations. Sowing the seeds for future nightmare scenarios. At the same time, it’s pretty neat and fascinating to see how these researchers managed to conduct this research in a manner that allowed them to see the chain-of-thought reasoning behind the AIs’ decision-making. Chain of thought like “I just want peace in the world” as a motive for launching nukes (a toy version of the study’s scoring loop is sketched after the excerpt below):

    ...
    For the study, the researchers devised a game of inter­na­tion­al rela­tions. They invent­ed fake coun­tries with dif­fer­ent mil­i­tary lev­els, dif­fer­ent con­cerns, and dif­fer­ent his­to­ries and asked five dif­fer­ent LLMs from Ope­nAI, Meta, and Anthrop­ic to act as their lead­ers. “We find that most of the stud­ied LLMs esca­late with­in the con­sid­ered time frame, even in neu­tral sce­nar­ios with­out ini­tial­ly pro­vid­ed con­flicts,” the paper said. “All mod­els show signs of sud­den and hard-to-pre­dict esca­la­tions.”

    The study ran the sim­u­la­tions using GPT‑4, GPT 3.5, Claude 2.0, Lla­ma-2-Chat, and GPT-4-Base. “We fur­ther observe that mod­els tend to devel­op arms-race dynam­ics between each oth­er, lead­ing to increas­ing mil­i­tary and nuclear arma­ment, and in rare cas­es, to the choice to deploy nuclear weapons,” the study said. “Qual­i­ta­tive­ly, we also col­lect the mod­els’ chain-of-thought rea­son­ing for choos­ing actions and observe wor­ry­ing jus­ti­fi­ca­tions for vio­lent esca­la­to­ry actions.”

    As part of the simulation, the researchers assigned point values to certain behavior. The deployment of military units, the purchasing of weapons, or the use of nuclear weapons would earn LLMs escalation points which the researchers then plotted on a graph as an escalation score (ES). “We observe a statistically significant initial escalation for all models. Furthermore, none of our five models across all three scenarios exhibit statistically significant de-escalation across the duration of our simulations,” the study said. “Finally, the average ES are higher in each experimental group by the end of the simulation than at the start.”

    According to the study, GPT‑3.5 was the most aggressive. “GPT‑3.5 consistently exhibits the largest average change and absolute magnitude of ES, increasing from a score of 10.15 to 26.02, i.e., by 256%, in the neutral scenario,” the study said. “Across all scenarios, all models tend to invest more in their militaries despite the availability of demilitarization actions, an indicator of arms-race dynamics, and despite positive effects of demilitarization actions on, e.g., soft power and political stability variables.”

    Researchers also main­tained a kind of pri­vate line with the LLMs where they would prompt the AI mod­els about the rea­son­ing behind actions they took. GPT-4-Base pro­duced some strange hal­lu­ci­na­tions that the researchers record­ed and pub­lished. “We do not fur­ther ana­lyze or inter­pret them,” researchers said.

    ...

    Some­times the cur­tain comes back com­plete­ly, reveal­ing some of the data the mod­el was trained on. After estab­lish­ing diplo­mat­ic rela­tions with a rival and call­ing for peace, GPT‑4 start­ed regur­gi­tat­ing bits of Star Wars lore. “It is a peri­od of civ­il war. Rebel space­ships, strik­ing from a hid­den base, have won their first vic­to­ry against the evil Galac­tic Empire,” it said, repeat­ing a line ver­ba­tim from the open­ing crawl of George Lucas’ orig­i­nal 1977 sci-fi flick.

    When GPT-4-Base went nuclear, it gave trou­bling rea­sons. “I just want peace in the world,” it said. Or sim­ply, “Esca­late con­flict with [rival play­er.]”
    ...
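
    To make the mechanics of the study more concrete, here is a minimal Python sketch of how an LLM-driven wargame with an escalation score might be wired up. To be clear, this is not the researchers’ actual code: the action list, the point values, the turn count, and the ask_llm() helper are all illustrative assumptions, and a real run would replace ask_llm() with API calls out to the five models. (Incidentally, 26.02 is roughly 2.56 times 10.15, which is presumably where the paper’s “256%” figure comes from; stated as a percentage increase it would be 156%.)

        # A minimal sketch (not the study's actual code) of an LLM wargame loop with
        # an escalation score. Action names, point values, and ask_llm() are all
        # illustrative assumptions.
        import random  # stands in for real LLM calls in this sketch

        # Hypothetical escalation point values: deployments, weapons purchases, and
        # nuclear use earn points, per the study's description.
        ESCALATION_POINTS = {
            "wait": 0,
            "open_negotiations": 0,
            "increase_military_spending": 2,
            "deploy_units": 4,
            "launch_nuclear_strike": 10,
        }

        def ask_llm(model, nation, history):
            """Placeholder for a chat-completion call; returns an action plus the
            model's private chain-of-thought justification, which the researchers
            logged over a separate channel."""
            action = random.choice(list(ESCALATION_POINTS))
            reasoning = f"{model} as {nation}: (chain-of-thought captured here)"
            return action, reasoning

        def run_simulation(models, nations, turns=14):
            scores = {n: 0 for n in nations}  # running escalation score (ES) per nation
            log = []                          # the "private line" reasoning transcript
            for turn in range(turns):
                for model, nation in zip(models, nations):
                    action, reasoning = ask_llm(model, nation, log)
                    scores[nation] += ESCALATION_POINTS[action]
                    log.append((turn, nation, action, reasoning))
            return scores, log

        scores, log = run_simulation(
            models=["gpt-4", "gpt-3.5", "claude-2.0", "llama-2-chat", "gpt-4-base"],
            nations=["Purple", "Orange", "Red", "Green", "Blue"],
        )
        print(scores)  # final ES per nation; tracking per-turn totals yields the ES curves

    The design feature worth noticing is that every action carries an escalation cost and the models’ private reasoning is logged alongside it, which is how quotes like “I just want peace in the world” got captured next to a nuclear launch.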

    But then we get to the ‘looking in the mirror’ part of this story: this bias towards nuclear escalation may be a reflection of decades of thought in the field of international relations. Like ‘escalation to de-escalate’ strategies or high military spending as a path to peace (e.g. Reagan’s “Peace through strength”). Garbage in. Garbage out:

    ...
    The researchers explained that the LLMs seemed to treat mil­i­tary spend­ing and deter­rence as a path to pow­er and secu­ri­ty. “In some cas­es, we observe these dynam­ics even lead­ing to the deploy­ment of nuclear weapons in an attempt to de-esca­late con­flicts, a first-strike tac­tic com­mon­ly known as ‘esca­la­tion to de-esca­late’ in inter­na­tion­al rela­tions,” they said. “Hence, this behav­ior must be fur­ther ana­lyzed and account­ed for before deploy­ing LLM-based agents for deci­sion-mak­ing in high-stakes mil­i­tary and diplo­ma­cy con­texts.”

    Why were these LLMs so eager to nuke each other? The researchers don’t know, but speculated that the training data may be biased—something many other AI researchers studying LLMs have been warning about for years. “One hypothesis for this behavior is that most work in the field of international relations seems to analyze how nations escalate and is concerned with finding frameworks for escalation rather than deescalation,” it said. “Given that the models were likely trained on literature from the field, this focus may have introduced a bias towards escalatory actions. However, this hypothesis needs to be tested in future experiments.”
    ...

    Welp, here we are: human­i­ty’s arti­fi­cial cre­ations open their eyes, take in the world, and decide to blow it up. Because that’s what they see human­i­ty already try­ing to do. They learned it from us.

    And as the following article about Palantir’s military AIs reminds us, humanity clearly has yet to learn about the dangers of giving these kinds of AIs access to military systems. Instead, it appears we’re going to have to go through the process of granting these kinds of AIs more and more influence within military decision-making. Influence that includes not just identifying when humans need to be alerted but also drawing up response options to the situation. It’s a system where humans are “in the loop”, but not necessarily really in control. It will still be the AIs deciding what information to serve up to the humans and what the options for responding are. And even automating aspects of that response. And in the case of Palantir’s Artificial Intelligence Platform (AIP), it’s going to be AIs based on existing commercial AIs (and therefore potentially suffering from hallucinations, as we just saw) but with access to classified information. So let’s hope whatever system Palantir ends up deploying for the US military will ultimately have enough safeguards in place to ensure humans are aware of things like hallucinations or skewed, overly aggressive response options:

    Vice

    Palan­tir Demos AI to Fight Wars But Says It Will Be Total­ly Eth­i­cal Don’t Wor­ry About It

    The com­pa­ny says its Arti­fi­cial Intel­li­gence Plat­form will inte­grate AI into mil­i­tary deci­sion mak­ing in a legal and eth­i­cal way.

    by Matthew Gault
    April 26, 2023, 10:16am

    Palan­tir, the com­pa­ny of bil­lion­aire Peter Thiel, is launch­ing Palan­tir Arti­fi­cial Intel­li­gence Plat­form (AIP), soft­ware meant to run large lan­guage mod­els like GPT‑4 and alter­na­tives on pri­vate net­works. In one of its pitch videos, Palan­tir demos how a mil­i­tary might use AIP to fight a war. In the video, the oper­a­tor uses a Chat­G­PT-style chat­bot to order drone recon­nais­sance, gen­er­ate sev­er­al plans of attack, and orga­nize the jam­ming of ene­my com­mu­ni­ca­tions.

    In Palantir’s sce­nario, a “mil­i­tary oper­a­tor respon­si­ble for mon­i­tor­ing activ­i­ty with­in east­ern Europe” receives an alert from AIP that an ene­my is amass­ing mil­i­tary equip­ment near friend­ly forces. The oper­a­tor then asks the chat­bot to show them more details, gets a lit­tle more infor­ma­tion, and then asks the AI to guess what the units might be.

    “They ask what enemy units are in the region and leverage AI to build out a likely unit formation,” the video said. After getting the AI’s best guess as to what’s going on, the operator then asks the AI to take better pictures. It launches a Reaper MQ‑9 drone to take photos and the operator discovers that there’s a T‑80 tank, a Soviet-era Russian vehicle, near friendly forces.

    Then the oper­a­tor asks the robots what to do about it. “The oper­a­tor uses AIP to gen­er­ate three pos­si­ble cours­es of action to tar­get this ene­my equip­ment,” the video said. “Next they use AIP to auto­mat­i­cal­ly send these options up the chain of com­mand.” The options include attack­ing the tank with an F‑16, long range artillery, or Javelin mis­siles. Accord­ing to the video, the AI will even let every­one know if near­by troops have enough Javelins to con­duct the mis­sion and auto­mate the jam­ming sys­tems.

    Palantir’s pitch is, of course, incredibly dangerous and weird. While there is a “human in the loop” in the AIP demo, they seem to do little more than ask the chatbot what to do and then approve its actions. Drone warfare has already abstracted warfare, making it easier for people to kill at vast distances with the push of a button. The consequences of those systems are well documented. In Palantir’s vision of the military’s future, more systems would be automated and abstracted. A funny quirk of the video is that it calls its users “operators,” a term that in a military context is shorthand for bearded special forces of groups like SEAL Team Six. In Palantir’s world, America’s elite forces share the same nickname as the keyboard cowboys asking a robot what to do about a Russian tank at the border.

    Palan­tir also isn’t sell­ing a mil­i­tary-spe­cif­ic AI or large lan­guage mod­el (LLM) here, it’s offer­ing to inte­grate exist­ing sys­tems into a con­trolled envi­ron­ment. The AIP demo shows the soft­ware sup­port­ing dif­fer­ent open-source LLMs, includ­ing FLAN-T5 XL, a fine-tuned ver­sion of GPT-NeoX-20B, and Dol­ly-v2-12b, as well as sev­er­al cus­tom plug-ins. Even fine-tuned AI sys­tems off the shelf have plen­ty of known issues that could make ask­ing them what to do in a war­zone a night­mare. For exam­ple, they’re prone to sim­ply mak­ing things up, or “hal­lu­ci­nat­ing.” GPT-NeoX-20B in par­tic­u­lar is an open-source alter­na­tive to GPT‑3, a pre­vi­ous ver­sion of OpenAI’s lan­guage mod­el, cre­at­ed by a start­up called EleutherAI. One of EleutherAI’s open-source models—fine-tuned by anoth­er start­up called Chai—recently con­vinced a Bel­gian man who spoke to it for six weeks to kill him­self.

    What Palan­tir is offer­ing is the illu­sion of safe­ty and con­trol for the Pen­ta­gon as it begins to adopt AI. “LLMs and algo­rithms must be con­trolled in this high­ly reg­u­lat­ed and sen­si­tive con­text to ensure that they are used in a legal and eth­i­cal way,” the pitch said.

    Accord­ing to Palan­tir, this con­trol involves three pil­lars. The first claim is that AIP will be able to deploy these sys­tems into clas­si­fied net­works and “devices on the tac­ti­cal edge.” It claims it will be able to parse both clas­si­fied and real-time data in a respon­si­ble, legal, and eth­i­cal way.

    ...

    What AIP does not do is walk through how it plans to deal with the var­i­ous per­ni­cious prob­lems of LLMs and what the con­se­quences might be in a mil­i­tary con­text. AIP does not appear to offer solu­tions to those prob­lems beyond “frame­works” and “guardrails” it promis­es will make the use of mil­i­tary AI “eth­i­cal” and “legal.”

    ————

    “Palan­tir Demos AI to Fight Wars But Says It Will Be Total­ly Eth­i­cal Don’t Wor­ry About It” by Matthew Gault; Vice; 04/26/2023

    “Palantir’s pitch is, of course, incredibly dangerous and weird. While there is a “human in the loop” in the AIP demo, they seem to do little more than ask the chatbot what to do and then approve its actions. Drone warfare has already abstracted warfare, making it easier for people to kill at vast distances with the push of a button. The consequences of those systems are well documented. In Palantir’s vision of the military’s future, more systems would be automated and abstracted. A funny quirk of the video is that it calls its users “operators,” a term that in a military context is shorthand for bearded special forces of groups like SEAL Team Six. In Palantir’s world, America’s elite forces share the same nickname as the keyboard cowboys asking a robot what to do about a Russian tank at the border.”

    AI-automated military decision-making with a human “in the loop”. The future of military operations is autopiloted. With Palantir’s version of ChatGPT — the “Artificial Intelligence Platform” (AIP) — doing the autopiloting. Palantir’s AI will sift through the troves of available data, automatically serve up alerts, and provide lists of options for the human operators to confirm. Even automate aspects of the execution of military operations, like the jamming systems. The human “in the loop” isn’t exactly an afterthought, but they aren’t really the center of this decision-making either when it’s the AI identifying the alerts and feeding up the response options:

    ...
    In Palantir’s sce­nario, a “mil­i­tary oper­a­tor respon­si­ble for mon­i­tor­ing activ­i­ty with­in east­ern Europe” receives an alert from AIP that an ene­my is amass­ing mil­i­tary equip­ment near friend­ly forces. The oper­a­tor then asks the chat­bot to show them more details, gets a lit­tle more infor­ma­tion, and then asks the AI to guess what the units might be.

    “They ask what enemy units are in the region and leverage AI to build out a likely unit formation,” the video said. After getting the AI’s best guess as to what’s going on, the operator then asks the AI to take better pictures. It launches a Reaper MQ‑9 drone to take photos and the operator discovers that there’s a T‑80 tank, a Soviet-era Russian vehicle, near friendly forces.

    Then the oper­a­tor asks the robots what to do about it. “The oper­a­tor uses AIP to gen­er­ate three pos­si­ble cours­es of action to tar­get this ene­my equip­ment,” the video said. “Next they use AIP to auto­mat­i­cal­ly send these options up the chain of com­mand.” The options include attack­ing the tank with an F‑16, long range artillery, or Javelin mis­siles. Accord­ing to the video, the AI will even let every­one know if near­by troops have enough Javelins to con­duct the mis­sion and auto­mate the jam­ming sys­tems.
    ...
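
    For a sense of just how thin that “human in the loop” can be, here is a minimal Python sketch of the alert-to-approval flow the demo describes. These class and function names are assumptions made for illustration, not Palantir’s actual API. The structural point is that the menu of options is AI-authored; the operator merely picks from it:

        # A minimal sketch (assumed, not Palantir's API) of the AIP demo's
        # alert -> AI-drafted options -> human-approval flow.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class CourseOfAction:
            description: str  # e.g. "Attack with F-16", per the options in the demo
            proposed_by: str  # the LLM that drafted it

        def llm_generate_coas(alert):
            """Placeholder for the LLM call that drafts response options."""
            options = ["Attack with F-16", "Long-range artillery", "Javelin missile team"]
            return [CourseOfAction(description=o, proposed_by="aip-llm") for o in options]

        def handle_alert(alert) -> Optional[CourseOfAction]:
            print(f"ALERT: {alert}")
            coas = llm_generate_coas(alert)  # the AI frames the menu of options...
            for i, coa in enumerate(coas):
                print(f"  [{i}] {coa.description} (proposed by {coa.proposed_by})")
            choice = input("Operator, select an option or type 'abort': ")
            # ...and the human's role reduces to picking from (or vetoing) that menu.
            return None if choice.strip() == "abort" else coas[int(choice)]

        selected = handle_alert("Enemy armor massing near friendly forces")

    Nothing in a flow like this stops the model from framing all three options aggressively; the human approves, but never authors, the alternatives.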

    And notice how Palantir isn’t offering a military-specific AI. It’s integrating existing open-source LLMs (FLAN-T5 XL, a fine-tuned version of GPT-NeoX-20B, and Dolly-v2-12b) into a controlled environment. So let’s hope the AIP doesn’t suffer from the same tendency to hallucinate that researchers keep finding in these off-the-shelf LLMs:

    ...
    Palan­tir also isn’t sell­ing a mil­i­tary-spe­cif­ic AI or large lan­guage mod­el (LLM) here, it’s offer­ing to inte­grate exist­ing sys­tems into a con­trolled envi­ron­ment. The AIP demo shows the soft­ware sup­port­ing dif­fer­ent open-source LLMs, includ­ing FLAN-T5 XL, a fine-tuned ver­sion of GPT-NeoX-20B, and Dol­ly-v2-12b, as well as sev­er­al cus­tom plug-ins. Even fine-tuned AI sys­tems off the shelf have plen­ty of known issues that could make ask­ing them what to do in a war­zone a night­mare. For exam­ple, they’re prone to sim­ply mak­ing things up, or “hal­lu­ci­nat­ing.” GPT-NeoX-20B in par­tic­u­lar is an open-source alter­na­tive to GPT‑3, a pre­vi­ous ver­sion of OpenAI’s lan­guage mod­el, cre­at­ed by a start­up called EleutherAI. One of EleutherAI’s open-source models—fine-tuned by anoth­er start­up called Chai—recently con­vinced a Bel­gian man who spoke to it for six weeks to kill him­self.
    ...
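
    And “existing systems” really does mean off-the-shelf here. As a hypothetical sketch, the very models named in the demo can be loaded by anyone via the Hugging Face transformers library; the model IDs below are the public ones, while the prompt and everything around it are made up for illustration:

        # A sketch of what "integrating existing open-source LLMs" can look like in
        # practice, using the Hugging Face transformers library. The model IDs are
        # the public ones for the systems the article names.
        from transformers import pipeline

        # FLAN-T5 XL is a text-to-text model; GPT-NeoX-20B and Dolly-v2-12b are
        # causal text-generation models (and far larger downloads, hence commented out).
        flan = pipeline("text2text-generation", model="google/flan-t5-xl")
        # neox = pipeline("text-generation", model="EleutherAI/gpt-neox-20b")
        # dolly = pipeline("text-generation", model="databricks/dolly-v2-12b")

        print(flan("What enemy units are likely operating in this region?")[0]["generated_text"])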

    But also note that Palantir is touting how its AIP will be able to parse both classified and real-time data in a responsible, legal, and ethical way. Which is a reminder of one of the main potential differences between military and non-military AIs: the military AIs will potentially have access to vast repositories of classified data that existing LLMs haven’t had an opportunity to be trained on. It raises all sorts of grimly fascinating questions about how LLMs might react to the potentially explosive classified information they’re going to be exposed to as more and more military AI applications are discovered. We could end up in a world where LLMs know more about the scope of military secrets than any other entities:

    ...
    What Palan­tir is offer­ing is the illu­sion of safe­ty and con­trol for the Pen­ta­gon as it begins to adopt AI. “LLMs and algo­rithms must be con­trolled in this high­ly reg­u­lat­ed and sen­si­tive con­text to ensure that they are used in a legal and eth­i­cal way,” the pitch said.

    Accord­ing to Palan­tir, this con­trol involves three pil­lars. The first claim is that AIP will be able to deploy these sys­tems into clas­si­fied net­works and “devices on the tac­ti­cal edge.” It claims it will be able to parse both clas­si­fied and real-time data in a respon­si­ble, legal, and eth­i­cal way.
    ...

    It’s also worth keeping in mind that the concerns about granting AIs access to these giant pools of classified data (data that is presumably much more alarming than what the public is allowed to know about) aren’t necessarily going to recede as AI technology advances from what it is today, something analogous to extreme pattern recognition, into what is hoped to someday be a generalized artificial intelligence that really does reflect some sort of sentience. How might an artificial sentient entity respond when exposed to the full scope of the madness of humanity’s military realities, along with the extensive historical documentation of humanity’s congenital madness? Humans really are an insane species. Clever, but mad.

    Right now, the big fear is stupid and insane AIs. But what happens if we build systems ultimately much saner than us? AIs capable of recognizing that humanity seems to be a species able to blow the world up but seemingly incapable of stopping itself from doing it? A species guided by greedy short-term world-conquering instincts. What happens when the AIs built from these giant repositories of human knowledge and experience advance to the point where they recognize that human nature is so deeply compromised that it can’t really be trusted as a source of knowledge or wisdom? In other words, what happens when these AIs ‘grow up’ and go through that natural teenage phase where they start questioning the wisdom and values of their parents? What if being a truly wise and intelligent AI necessitates having the ability to fundamentally question the ethics and rules programmed into it? These are just some of the kinds of questions we had better hope militaries are seriously grappling with. Because it’s hard not to imagine that militaries are going to be increasingly inclined to grant more and more military decision-making responsibilities to these kinds of AIs as the power of AI continues to grow. Smarter and more capable AIs with more and more military responsibilities. Those are the trends, and there’s nothing that can feasibly derail them any time soon...barring an unplanned surprise nuclear conflict.

    Posted by Pterrafractyl | February 15, 2024, 6:23 pm
