Science and philosophy proceed most efficiently and effectively through open, peer-reviewed processes — not hidden away in closed (and, as others have said, close-minded echo-chamber) communities.
The Fourth Conference on Artificial General Intelligence (AGI-11) will be held at Google headquarters in Mountain View, California from August 3 to 6 this summer — a convenient location for many of you (particularly when compared to Switzerland or the east coast). There is, of course, a website and a call for papers.
Shane Legg made this observation about last year’s conference:
I think the most important missing ingredient of the conference was a lack of discussion about AGI safety issues. From what I recall, during the main conference presentations Mark Waser was the only person to directly take on the topic. During the final workshop session Roko Mijic appeared out of the blue and gave a talk on Yudkowsky style Friendly AI. A show of hands revealed that while half the audience had heard of SIAI, few had heard of CEV. Roko then kicked off his talk by describing the creation of an AGI as likely being worse than the Sicilian mafia combined with grey goo. It’s hard to say what the audience who’d never encountered CEV etc. were thinking at this point, but I’d hazard a guess that they’d written him off as some kind of paranoid crackpot. In any case, what did become clear is that a sizable part of the AGI community is not familiar with FAI thinking.
The number of other peer-reviewed scientific conferences where Friendly AI-related papers are accepted as on-topic is relatively small. I am only aware of the European and International Association for Computing and Philosophy (IACAP) conferences (IACAP-11, to be held at Aarhus University from July 4 to 6 this summer, has a March 15 call for papers) and the Biologically Inspired Cognitive Architectures conferences held just outside Washington, DC every fall.
Reviewing Joshua Fox’s informal bibliography on the Intelligence Explosion and Friendly AI clearly reveals the paucity of good peer-reviewed references. In particular, it is noteworthy (and scientists DO notice) that there are no peer-reviewed papers that propose or defend Friendly AI or CEV.
My challenge is quite simple: submit to AGI-11 (or IACAP-11) and remedy that lack — or propose some alternate solutions, beginnings, or pathways to solutions. I’ll be submitting to both and am willing to assist anyone else in doing so as well. Instead of complaining that “nobody is listening or concerned”, why not take effective action? Until these positions are advanced via the normal processes of science and philosophy (and society), you have no one to blame but yourself if nobody seems to care.
ADDITION: I would have cross-posted this to LessWrong, but you aren’t permitted to post unless you have sufficient karma (which, in my case, is normally sacrificed to said posting ;-)).