What are the next challenges for AI?

The future of brain-machine synchronisation

Hamilton Mann, Group Vice President of Digital Marketing and Digital Transformation at Thales and Senior Lecturer at INSEAD, Cornelia C. Walther, Senior Visiting Scientist at Wharton Initiative for Neuroscience (WiN) and Michael Platt, Director of the Wharton Neuroscience Initiative and a Professor of Marketing, Neuroscience, and Psychology at the University of Pennsylvania
On October 30th, 2024
Key takeaways
  • The evolution of AI represents a breakthrough in the relationship between humans and machines.
  • AI is now capable of generating responses similar to those of humans and adapting to the context of its interactions.
  • Advances such as brain-computer interfaces (BCIs) make it possible for AI to connect with human thoughts and emotions.
  • Neuroscience can also guide the development of AI, for example through alternatives such as neuromorphic computing.
  • Despite its positive implications, the human-machine relationship raises major ethical issues, notably concerning data confidentiality and the preservation of human autonomy.

The remarkable evolution of Artificial Intelligence (AI) systems represents a paradigm shift in the relationship between humans and machines. This transformation is evident in the seamless interactions facilitated by these advanced systems, where adaptability emerges as a defining characteristic, resonating with the fundamental human capacity to learn from experience and predict behaviour.

AI mimics human learning

One facet of AI that aligns closely with human cognitive processes is Reinforcement Learning (RL). RL mimics the human learning paradigm by allowing AI systems to learn through interaction with an environment, receiving feedback in the form of rewards or penalties. By contrast, Large Language Models (LLMs) play a crucial role in pattern recognition, capturing the intricate nuances of human language and behaviour. These models, such as ChatGPT and BERT, excel in understanding contextual information, grasping the subtleties of language, and predicting user intent. Leveraging vast datasets, LLMs acquire a comprehensive understanding of linguistic patterns, enabling them to generate human-like responses and adapt to aspects of user behaviour, sometimes with remarkable accuracy.
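
To make this pattern learning concrete, consider a deliberately tiny sketch: a bigram model that learns next-word frequencies from an invented toy corpus and samples continuations from them. Real LLMs learn vastly richer patterns with neural networks, but the predict-the-next-token principle is the same:

```python
# A toy illustration of statistical pattern learning in language models:
# record which words follow which, then sample continuations accordingly.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)  # record each observed continuation

word, generated = "the", ["the"]
for _ in range(6):
    if word not in next_words:
        break  # no continuation was ever observed after this word
    word = random.choice(next_words[word])  # sample proportionally to frequency
    generated.append(word)

print(" ".join(generated))  # e.g. "the cat sat on the mat and"
```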

The synergy between RL and LLMs creates a powerful predictor of human behaviour. RL contributes the ability to learn from interactions and adapt, while LLMs enhance prediction capabilities through pattern recognition. AI systems based on RL can thus display a form of behavioural synchrony. At its core, RL enables AI systems to learn optimal sequences of actions in interactive environments, gradually converging on a policy. Analogous to a child touching a hot surface and learning to avoid it, these AI systems adapt based on the positive or negative feedback they receive.
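
The hot-surface analogy can be written down almost literally. The sketch below uses an invented two-action environment and illustrative reward values; it is not any particular production system, only the reward-and-penalty update loop at the heart of RL:

```python
# A minimal sketch of learning from rewards and penalties, in the spirit of
# the hot-surface analogy. Environment and reward values are invented.
import random

Q = {"touch_hot": 0.0, "avoid_hot": 0.0}  # value estimate per action
ALPHA = 0.1    # learning rate
EPSILON = 0.1  # exploration probability

def reward(action: str) -> float:
    # Touching the hot surface is penalised; avoiding it is mildly rewarded.
    return -1.0 if action == "touch_hot" else 0.5

for _ in range(1000):
    # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
    if random.random() < EPSILON:
        action = random.choice(list(Q))
    else:
        action = max(Q, key=Q.get)
    # Nudge the estimate toward the observed feedback.
    Q[action] += ALPHA * (reward(action) - Q[action])

print(Q)  # "avoid_hot" ends up with the clearly higher value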

AI replicates human interactions

AI agents using deep reinforcement learning, such as Google DeepMind’s AlphaZero, learn and improve by playing millions of games against themselves, thereby refining their strategies over time. This self-improvement process in AI involves an agent iteratively learning from its own actions and outcomes. Similarly, in human interactions, brain synchrony occurs when individuals engage in cooperative tasks, leading to aligned patterns of brain activity that facilitate shared understanding and collaboration. Unlike AI, humans achieve this synchrony through interaction with others rather than themselves.
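
As a toy stand-in for this self-improvement loop (vastly simpler than AlphaZero’s deep RL and search), the classic fictitious-play scheme below has two players in matching pennies repeatedly best-respond to each other’s accumulated history; through self-play alone, both empirical strategies drift toward the 50/50 equilibrium:

```python
# Fictitious play in matching pennies: each player best-responds to the
# other's empirical history. A classic game-theory toy, not AlphaZero,
# but it shows strategies improving purely through self-play.
counts = {"A": [1, 1], "B": [1, 1]}  # (heads, tails) counts per player

def best_response(opponent_counts: list[int], matcher: bool) -> int:
    # The matcher copies the opponent's likelier move; the mismatcher opposes it.
    likely = 0 if opponent_counts[0] >= opponent_counts[1] else 1
    return likely if matcher else 1 - likely

for _ in range(10_000):
    a = best_response(counts["B"], matcher=True)   # A wins by matching B
    b = best_response(counts["A"], matcher=False)  # B wins by mismatching A
    counts["A"][a] += 1
    counts["B"][b] += 1

for player, c in counts.items():
    print(player, [round(x / sum(c), 3) for x in c])  # both near [0.5, 0.5]
```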

What’s more, AI systems can also learn from interactions with humans. Just as human brain synchrony enhances cooperation and understanding, AI systems can improve and align their responses through extensive iterative learning from human interactions. While AI systems do not literally share knowledge as human brains do, they become repositories of data inherited from these interactions, which corresponds to a form of knowledge. This process of learning from vast datasets, including human interactions, can be seen as a form of ‘collective memory’. This analogy highlights the potential for AI systems to evolve while being influenced by humans, while also influencing humans through their use, indicating a form of ‘computational synchrony’ that could be seen as an analogue to human brain synchrony.

In addition, AI systems enabled with social cue recognition are being designed to detect and respond to human emotions. These ‘Affective Computing’ systems, a term coined by Rosalind Picard in 1995 [1], can interpret human facial expressions, voice modulations, and even text to gauge emotions and then respond accordingly. An AI assistant that can detect user frustration in real-time and adjust its responses or assistance strategy is a rudimentary form of behavioural synchronisation based on immediate feedback.

For instance, affective computing encompasses technologies like emotion recognition software that analyses facial expressions and voice tone to determine a person’s emotional state. Real-time sentiment analysis in text and voice allows AI to adjust its interactions to be more empathetic and effective. This capability is increasingly used in customer service chatbots and virtual assistants to improve user experience by making interactions feel more natural and responsive.
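
A minimal sketch of such a frustration-detection loop might look as follows; the keyword lexicon and threshold are invented for the example, and a production system would rely on trained emotion-recognition models rather than word counting:

```python
# A rule-based stand-in for frustration detection: count negative keywords
# and switch strategy past a threshold. Lexicon and threshold are invented.
NEGATIVE_WORDS = {"frustrated", "annoying", "useless", "angry", "broken"}

def frustration_score(text: str) -> float:
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in NEGATIVE_WORDS for w in words) / max(len(words), 1)

def respond(text: str) -> str:
    if frustration_score(text) > 0.1:
        # Detected frustration: switch to an empathetic, step-by-step strategy.
        return "I'm sorry this is frustrating. Let's go through it step by step."
    return "Sure, here is the information you asked for."

print(respond("This is useless and I am angry!"))
print(respond("Can you show me the report?"))
```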

Just as humans adjust their behaviour in response to social cues, adaptive AI systems modify their actions based on user input, potentially leading to a form of ‘synchronisation’ over time. Assessing the social competence of such an AI system could be done by adapting tools like the Social Responsiveness Scale (SRS), a well-validated psychiatric instrument that measures how adept an individual is at modifying their behaviour to fit the behaviour and disposition of a social partner. The SRS serves as a proxy for ‘theory of mind’: the ability to attribute mental states (beliefs, intents, desires, emotions, knowledge) to oneself and to others.

Moving towards resonance

Brain-Computer Interfaces (BCIs) have ushered in a transformative era in which thoughts can be translated into digital commands and human communication. Companies like Neuralink are making strides in developing interfaces that enable paralysed individuals to control devices directly with their thoughts. By connecting direct recordings of brain activity with AI systems, researchers enabled an individual to speak at normal conversational speed after being mute for more than a decade following a stroke. AI systems can also be used to decode not only what an individual is reading but what they are thinking, based on non-invasive measures of brain activity using functional MRI.

Based on these advances, it’s not far-fetched to imagine a future scenario in which a professional uses a non-invasive BCI (e.g., wearable brainwave monitors such as Cogwear, Emotiv, or Muse) to communicate with AI design software. The software, recognising the designer’s neural patterns associated with creativity or dissatisfaction, could instantaneously adjust its design proposals, achieving a level of synchrony previously thought to be the realm of science fiction. This technological frontier holds the promise of a distinctive form of synchrony, where the interplay between the human brain and AI transcends mere command interpretation, opening up a future in which AI resonates with human thoughts and emotions.
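
A speculative sketch of this closed loop is given below. The EEG here is simulated noise and the ‘dissatisfaction’ heuristic is invented; real headsets such as Emotiv or Muse expose their own SDKs, which are not modelled:

```python
# Simulated closed loop: estimate band power from an EEG window and adjust
# the design proposal. The EEG is random noise and the alpha-vs-beta
# "disengagement" heuristic is invented for illustration only.
import numpy as np

FS = 256  # sampling rate in Hz

def band_power(window: np.ndarray, lo: float, hi: float) -> float:
    freqs = np.fft.rfftfreq(len(window), d=1 / FS)
    power = np.abs(np.fft.rfft(window)) ** 2
    return float(power[(freqs >= lo) & (freqs < hi)].sum())

def design_step(window: np.ndarray) -> str:
    alpha = band_power(window, 8, 12)  # alpha band, often tied to disengagement
    beta = band_power(window, 13, 30)  # beta band, often tied to active focus
    return "propose alternative design" if alpha > beta else "refine current design"

eeg = np.random.randn(FS * 2)  # two seconds of simulated EEG
print(design_step(eeg))
```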

Crucially, the resonance envisioned here transcends the behavioural domain to encompass communication as well. As BCIs evolve, the potential for outward expressions becomes pivotal. Beyond mere command execution, the integration of facial cues, tone of voice, and other non-verbal cues into AI’s responses amplifies the channels for resonance. This expansion into multimodal communication may enrich synchrony by capturing elements from the holistic nature of human expression, creating a more immersive and natural interaction.

However, the concept of resonance also presents the challenge of navigating the uncanny valley, a phenomenon where humanoid entities that closely resemble humans provoke discomfort. Striking the right balance is paramount, ensuring the AI’s responsiveness aligns authentically with human expressions without entering the discomfiting realm of the uncanny valley. The potential of BCIs to foster synchrony between the human brain and AI introduces promising yet challenging prospects for human-computer collaboration.

Turning to neuroscience

Neuroscience not only illuminates the basis of biological intelligence but may also guide the development of artificial intelligence [2]. Evolutionary constraints like space and communication efficiency have shaped the emergence of efficient systems in nature. This prompts exploration of embedding similar constraints in AI systems, envisioning organically evolving artificial environments optimised for efficiency and environmental sustainability: the focus of research in so-called “neuromorphic computing.”

For example, oscillatory neural activity appears to boost communication between distant brain areas. The brain employs a theta-gamma rhythm to package and transmit information, similar to a postal service, thereby enhancing efficient data transmission and retrieval [3]. This interplay has been likened to an advanced data transmission system, where low-frequency alpha and beta brain waves suppress neural activity associated with predictable stimuli, allowing neurons in sensory regions to highlight unexpected stimuli via higher-frequency gamma waves. Bastos et al. [4] found that inhibitory predictions carried by alpha/beta waves typically flow backward through deeper cortical layers, while excitatory gamma waves conveying information about novel stimuli propagate forward through superficial layers.
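
This theta-gamma ‘packaging’ can be illustrated numerically. In the sketch below, a synthetic signal is built so that gamma-band amplitude waxes and wanes with theta phase, and the modulation profile is then recovered; all frequencies and modulation depths are chosen purely for illustration:

```python
# Synthetic theta-gamma coupling: gamma amplitude is tied to theta phase,
# then the modulation profile is recovered from the signal.
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(0, 5, 1 / fs)            # five seconds at 1 kHz
theta = np.sin(2 * np.pi * 6 * t)      # 6 Hz theta rhythm
gamma_amp = 0.5 * (1 + theta)          # gamma amplitude follows theta phase
signal = theta + gamma_amp * np.sin(2 * np.pi * 40 * t) + 0.1 * np.random.randn(t.size)

theta_phase = np.angle(hilbert(theta))
gamma_env = np.abs(hilbert(signal - theta))  # envelope of the fast component

# Mean gamma amplitude per theta-phase bin: a crude modulation profile.
bins = np.linspace(-np.pi, np.pi, 13)
profile = [gamma_env[(theta_phase >= lo) & (theta_phase < hi)].mean()
           for lo, hi in zip(bins[:-1], bins[1:])]
print(np.round(profile, 2))  # peaks near phase 0, i.e. at the theta peak
```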

In the mammalian brain, sharp wave ripples (SPW-Rs) exert widespread excitatory influence throughout the cortex and multiple subcortical nuclei [5]. Within these SPW-Rs, neuronal spiking is meticulously orchestrated both temporally and spatially by interneurons, facilitating the condensed reactivation of segments from waking neuronal sequences [6]. This orchestrated activity aids in the transmission of compressed hippocampal representations to distributed circuits, thereby reinforcing the process of memory consolidation [7].
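
A cartoon of this condensed reactivation, with invented cell names and timings, is simply a waking sequence replayed on a compressed timescale:

```python
# Waking sequence of "place cell" activations replayed on a compressed
# timescale, as during a ripple event. Cell names and timings are invented.
waking_sequence = [("cell_A", 0.0), ("cell_B", 0.5), ("cell_C", 1.0)]  # seconds
COMPRESSION = 10.0  # replay runs roughly an order of magnitude faster

replayed = [(cell, t / COMPRESSION) for cell, t in waking_sequence]
print(replayed)  # same order, condensed into a brief event
```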

Recent AI experiments, particularly those involving OpenAI’s GPT-4, unveil intriguing parallels with evolutionary learning. Unlike traditional task-oriented training, GPT-4 learns from extensive datasets, refining its responses based on accumulated ‘experiences’. Furthermore, pattern recognition by GPTs parallels pattern recognition by layers of neurons in the brain. This approach mirrors the adaptability observed in natural evolution, where organisms refine their behaviours over time to better resonate with their environment.

From brain waves to AI frequencies

Drawing inspiration from the architecture of the brain, neural networks in AI are constructed with nodes organised in layers that respond to inputs and then generate outputs. In the realm of human neural synchrony research, investigating the role of oscillations has proven to be a pivotal area of interest. High-frequency oscillatory neural activity stands out as a crucial element, demonstrating its ability to facilitate communication between distant brain areas. A particularly intriguing phenomenon in this context is the theta-gamma neural code, which shows how our brains ‘package’ and ‘transmit’ information, reminiscent of a postal service meticulously wrapping packages for efficient delivery. This neural packaging system orchestrates specific rhythms, akin to a coordinated dance, ensuring the streamlined transmission of information.
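
For readers picturing these layered networks, here is a bare-bones forward pass: nodes organised in layers respond to inputs and generate outputs. The layer sizes and random weights are arbitrary; a real network would learn its weights from data:

```python
# A two-layer network's forward pass: each layer of nodes transforms its
# inputs and passes the result on. Sizes and weights are arbitrary here.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input (3) -> hidden (4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # hidden (4) -> output (2)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.maximum(0, W1 @ x + b1)  # ReLU nodes respond to their inputs
    return W2 @ hidden + b2              # output layer aggregates the responses

print(forward(np.array([0.5, -1.0, 2.0])))
```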

This perspective aligns with the concept of “neuromorphic computing,” where AI architecture is based on neural circuitry. The key advantage of neuromorphic computing lies in its computational efficiency, addressing the significant energy consumption challenges faced by traditional AI models. The training of large AI models, such as those used in natural language processing or image recognition, can consume an exorbitant amount of energy. For instance, training a single AI model can emit as much carbon dioxide as five cars over their entire lifespan [8]. Moreover, researchers at the University of Massachusetts, Amherst, found that the carbon footprint of training deep learning models has been doubling approximately every 3.5 months, far outpacing improvements in computational efficiency [9].
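
For scale, the widely quoted ‘five cars’ figure traces to Strubell et al. [8], who estimated roughly 626,155 lbs of CO2-equivalent for training a large transformer with neural architecture search, against about 126,000 lbs for an average car’s lifetime including fuel:

```python
# Back-of-the-envelope check of the comparison, using the figures reported
# by Strubell et al. [8] (pounds of CO2-equivalent).
training_emissions_lbs = 626_155  # large transformer + neural architecture search
car_lifetime_lbs = 126_000        # average car, manufacturing plus fuel
print(training_emissions_lbs / car_lifetime_lbs)  # ~5 car lifetimes
```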

Neuromorphic computing offers a promising alternative. By mimicking the architecture of the human brain, neuromorphic systems aim to achieve higher computational efficiency and lower energy consumption compared to conventional AI architectures [10]. For example, IBM’s TrueNorth neuromorphic chip has demonstrated energy-efficiency gains of several orders of magnitude over traditional CPUs and GPUs [11]. Additionally, neuromorphic computing architectures are inherently suited to low-power, real-time processing tasks, making them ideal for applications like edge computing and autonomous systems, further contributing to energy savings and environmental sustainability.
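
The event-driven style that neuromorphic hardware exploits can be sketched with a single leaky integrate-and-fire neuron: it stays silent, and in hardware would consume almost no energy, unless its input drives it past threshold. Parameters here are illustrative:

```python
# A leaky integrate-and-fire neuron: silent (and, in hardware, nearly free)
# below threshold, emitting sparse spike events above it. Units are arbitrary.
import numpy as np

DT, TAU, V_TH, V_RESET = 1e-3, 20e-3, 1.0, 0.0

def simulate(currents: np.ndarray) -> list[int]:
    v, spikes = 0.0, []
    for step, i_in in enumerate(currents):
        v += DT / TAU * (-v + i_in)  # leaky integration of the input current
        if v >= V_TH:                # threshold crossing emits a spike event
            spikes.append(step)
            v = V_RESET
    return spikes

print(len(simulate(np.full(500, 0.8))))  # sub-threshold drive: no events at all
print(len(simulate(np.full(500, 1.5))))  # stronger drive: a sparse spike train
```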

Implications for society

In the realm of training and skill development, synchronised AI has the potential to personalise learning experiences based on an employee’s unique learning curve, facilitating faster and more effective skill acquisition. From a customer engagement standpoint, synchronised AI interfaces might more precisely understand and, in some cases, anticipate user expectations based on advanced behavioural patterns.

For operational efficiency, especially in sectors like manufacturing or logistics, AI systems working in coordination with each other can optimise processes, reduce waste, and strengthen the supply chain. This would lead to increased profitability, with an ever-greater ability to integrate sustainability considerations. In risk management, synchronised AI systems analysing vast datasets collaboratively might better predict potential risks or market downturns, equipping businesses and other organisations to prepare or pivot before a crisis emerges, limiting the related social and societal impact. Likewise, synchronised AI systems could provide insights for more efficient urban planning and environmental protection strategies. This could lead to better traffic management, energy conservation, and pollution control, enhancing the quality of life in urban areas.

In various domains beyond business, deployment of AI with a prosocial orientation holds immense potential for the well-being of humanity and the planet. Particularly in healthcare, synchronisation between the human brain and AI systems could usher in a revolutionary era for patient care and medical research. Recent studies highlight the positive impact of clinicians synchronising their movements with patients, thereby increasing trust and reducing pain. Extending this concept to AI chatbots or AI-enabled robotic caregivers that are synchronised with those under their ‘care’ holds the promise of enhancing patient experience and improving outcomes, as evidenced by recent research indicating that LLMs outperformed physicians in diagnosing illnesses and that patients preferred interacting with them.

In the educational domain, the integration of AI systems with a focus on synchrony is equally promising. Research demonstrated that synchronised brain waves in high school classrooms were predictive of higher performance and happiness among students [12]. This study underscores the significance of neural synchrony in the learning environment. By leveraging AI tutoring systems capable of detecting and responding to students’ cognitive states in real-time, education technology can potentially replicate the positive outcomes observed in synchronised classroom settings. Incorporation of AI systems that resonate with students’ brain states has the potential to create a more conducive and effective learning atmosphere, optimising engagement and fostering positive learning outcomes.
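
What such real-time adaptation could look like, in heavily simplified form, is a tutoring loop that raises or lowers task difficulty from an engagement estimate; here the estimate is simulated, since how to measure engagement robustly remains an open research question:

```python
# A simplified adaptive-tutoring loop: difficulty tracks a (simulated)
# engagement estimate. The scale, thresholds, and signal are all invented.
import random

difficulty = 3  # invented 1-5 difficulty scale

def engagement_estimate() -> float:
    return random.random()  # stand-in for a neural or behavioural measurement

for _ in range(10):
    e = engagement_estimate()
    if e < 0.3:
        difficulty = max(1, difficulty - 1)  # disengaged: ease off
    elif e > 0.8:
        difficulty = min(5, difficulty + 1)  # highly engaged: stretch further
    print(f"engagement={e:.2f} -> difficulty {difficulty}")
```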

Perspectives and potential

The excitement surrounding the prospects of brain-to-machine and machine-to-machine synchrony brings with it a set of paramount concerns that necessitate scrutiny and that are anything but merely technical. Data privacy emerges as a critical apprehension, given the intimate nature of the neural information being processed by these systems. The ethical dimensions of such synchronisation, particularly in the realm of AI decision-making, present complex challenges that require careful consideration [13][14].

Expanding on these concerns, two overarching issues demand heightened attention. Firstly, the preservation of human autonomy stands as a foundational principle. As we delve into the era of brain-machine synchrony, it becomes imperative to ensure that individuals retain their ability to make informed choices. Avoiding scenarios where individuals feel coerced or manipulated by technology is crucial in upholding ethical standards.

Secondly, the question of equity in access to these technologies emerges as a pressing matter. Currently, such advanced technologies are often costly and may not be accessible to all segments of society. This raises concerns about exacerbating existing inequalities [15]. A scenario where only certain privileged groups can harness the benefits of brain-machine synchrony might deepen societal divides. Moreover, the lack of awareness about these technologies further compounds issues of equitable access [16].

The integration of AI with human cognition marks the threshold of an unprecedented era, where machines not only replicate human intelligence but also mirror intricate behavioural patterns and emotions. The potential synchronisation of AI with human intent and emotion holds the promise of redefining the nature of human-machine collaboration and, perhaps, even the essence of the human condition. The outcome of harmonising humans and machines will significantly impact humanity and the planet, contingent upon the human aspirations guiding this pursuit, and may open opportunities for an advanced human-centred AI experience in a “Fusion Mode,” as coined in the “Artificial Integrity” concept. This raises a timeless question, reverberating through the course of human history: what do we value, and why?

A crucial point to emphasise is that the implications of synchronising humans and machines extend far beyond the realm of AI experts; they encompass every individual. This underscores the necessity to raise awareness and engage the public at every stage of this transformative journey. As the development of AI progresses, it is essential to ensure that the ethical, societal, and existential dimensions are shaped by collective values and reflections, avoiding unilateral decisions by Big Tech that may not align with the broader interests of humanity. What happens next shapes our individual and collective future. Getting it right is our shared responsibility.

[1] Picard, R. W. (1995). “Affective Computing.” MIT Media Laboratory Perceptual Computing Section.
[2] Achterberg, J., Akarca, D., Strouse, D. J., et al. (2023). Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence, 5, 1369–1381. https://doi.org/10.1038/s42256-023-00748-9
[3] Lisman, J. E., & Idiart, M. A. (1995). Storage of 7 +/- 2 short-term memories in oscillatory subcycles. Science, 267(5203), 1512–1515. https://doi.org/10.1126/science.7878473
[4] Bastos, A. M., Lundqvist, M., Waite, A. S., & Miller, E. K. (2020). Layer and rhythm specificity for predictive routing. Proceedings of the National Academy of Sciences, 117(49), 31459–31469. https://doi.org/10.1073/pnas.2014868117
[5] Buzsáki, G. (2015). Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus, 25(10), 1073–1188. https://doi.org/10.1002/hipo.22488
[6] O’Neill, J., Boccara, C. N., Stella, F., Schoenenberger, P., & Csicsvari, J. (2008). Superficial layers of the medial entorhinal cortex replay independently of the hippocampus. Science, 320(5879), 129–133.
[7] Ego-Stengel, V., & Wilson, M. A. (2010). Disruption of ripple-associated hippocampal activity during rest impairs spatial learning in the rat. Hippocampus, 20(1), 1–10.
[8] Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. https://doi.org/10.18653/v1/P19-1356
[9] Schwartz, R., Dodge, J., Smith, N. A., Overton, J., & Varshney, L. R. (2019). Green AI. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 9342–9350. https://doi.org/10.1609/aaai.v33i01.33019342
[10] Furber, S. B., Galluppi, F., Temple, S., & Plana, L. A. (2014). The SpiNNaker Project. Proceedings of the IEEE, 102(5), 652–665. https://doi.org/10.1109/JPROC.2014.2304638
[11] Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., … Modha, D. S. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197), 668–673. https://doi.org/10.1126/science.1254642
[12] Dikker, S., Wan, L., Davidesco, I., Kaggen, L., Oostrik, M., McClintock, J., … & Poeppel, D. (2017). Brain-to-brain synchrony tracks real-world dynamic group interactions in the classroom. Current Biology, 27(9), 1375–1380.
[13] Dignum, V. (2018). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. AI & Society, 33(3), 475–476. https://doi.org/10.1007/s00146-018-0812-0
[14] Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
[15] Diakopoulos, N. (2016). Accountability in Algorithmic Decision Making. Communications of the ACM, 59(2), 56–62. https://doi.org/10.1145/2844148
[16] Kostkova, P., Brewer, H., de Lusignan, S., Fottrell, E., Goldacre, B., Hart, G., Koczan, P., Knight, P., Marsolier, C., McKendry, R. A., Ross, E., Sasse, A., Sullivan, R., Chaytor, S., Stevenson, O., Velho, R., Tooke, J., & Ross, E. (2016). Who Owns the Data? Open Data for Healthcare. Frontiers in Public Health, 4. https://doi.org/10.3389/fpubh.2016.00107
