PHILIP K. DICK’S DEPARTMENT OF PRE-CRIME A REALITY… BUT IT ...

Occasionally I've blogged about the increasing reliance of various law enfarcement ... er... enforcement... departments on computer algorithms, not only to predict crimes, but also to profile those who might be inclined to become criminals. If that sounds like the fantasies of sci-fi master Philip K. Dick, that's because it is. Dick's fiction has, of course, been the subject of numerous movies, from Blade Runner, to Total Recall, to the Tom Cruise movie Minority Report, which is all about the psychic prediction of crime, and about people being arrested before they ever commit the crime itself. OK, it's not computer prediction or profiling, but it might as well be, if this article shared by Mr. T.M. from Mother Jones is any indicator:

The Legal System Uses an Algorithm to Predict If People Might Be Future Criminals. It's Biased Against Blacks.

Before we get to the high octane speculation of the day, in order to have a backdrop for those speculations, consider these paragraphs from the article:

On a spring afternoon in 2014, Brisha Borden was running late to pick up her god-sister from school when she spotted an unlocked kid's blue Huffy bicycle and a silver Razor scooter. Borden and a friend grabbed the bike and scooter and tried to ride them down the street in the Fort Lauderdale suburb of Coral Springs. Just as the 18-year-old girls were realizing they were too big for the tiny conveyances—which belonged to a 6-year-old boy—a woman came running after them saying, "That's my kid's stuff." Borden and her friend immediately dropped the bike and scooter and walked away.

But it was too late—a neighbor who witnessed the heist had already called the police. Borden and her friend were arrested and charged with burglary and petty theft for the items, which were valued at a total of $80.

Compare their crime with a similar one: The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store.

Prater was the more seasoned criminal. He had already been convicted of armed robbery and attempted armed robbery, for which he served five years in prison, in addition to another armed robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was a juvenile.

Yet something odd happened when Borden and Prater were booked into jail: A computer program spat out a score predicting the likelihood of each committing a future crime. Borden—who is black—was rated a high risk. Prater—who is white—was rated a low risk.

Two years later, we know the computer algorithm got it exactly backward. Borden has not been charged with any new crimes. Prater is serving an eight-year prison term for subsequently breaking into a warehouse and stealing thousands of dollars' worth of electronics.

And as if that is not bad enough, the anti-black bias of these programs appears to be endemic and, one suspects, perhaps even innate, or even deliberately pre-programmed:

In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.

  • The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
  • White defendants were mislabeled as low risk more often than black defendants.

Could this disparity be explained by defendants' prior crimes or the type of crimes they were arrested for? No. We ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants' age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind. (Read our analysis.)

The algorithm used to create the Florida risk scores is a product of a for-profit company, Northpointe. The company disputes our analysis.

In a letter, it criticized ProPublica's methodology and defended the accuracy of its test: "Northpointe does not agree that the results of your analysis, or the claims being made based upon that analysis, are correct or that they accurately reflect the outcomes from the application of the model."

Northpointe's software is among the most widely used assessment tools in the country. The company does not publicly disclose the calculations used to arrive at defendants' risk scores, so it is not possible for either defendants or the public to see what might be driving the disparity.
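(For readers who want to see concretely what "isolating the effect of race" from the other variables means, here is a minimal sketch of that kind of statistical test in Python. To be clear, this is not ProPublica's actual code or data; the file name and column names are hypothetical stand-ins, and the model shown, a plain logistic regression with controls, is only one way such a test might be run.)

    # A minimal sketch (not ProPublica's actual code) of a logistic regression that
    # asks: does race still predict a "high risk" label once age, sex, prior
    # offenses, and actual recidivism are controlled for? The CSV file and the
    # column names are hypothetical stand-ins.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("risk_scores.csv")  # hypothetical defendant-level dataset

    model = smf.logit(
        "high_risk ~ C(race) + age + C(sex) + priors_count + two_year_recid",
        data=df,
    ).fit()

    print(model.summary())

    # Exponentiated coefficients are odds ratios. If the odds ratio on the race
    # term stayed well above 1 after the controls, that would be the kind of
    # residual disparity the article describes (values around 1.45 and 1.77
    # would correspond to the "45 percent" and "77 percent" figures quoted above).
    print(np.exp(model.params).round(2))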

In a way, one is looking at the judicial equivalent of those computerized voting algorithms, and this in a season when reports of voting fraud are coming in so fast and furious, with so many states reporting a pro-Darthillary voting bias, that some have moved back to paper ballots. (Maybe that's because Darth Soros owns stock in companies providing such machines to twenty-two US states, by some estimates. But not to worry! We have the Department of Homeland (In)security and the UN supervising elections, because according to Fearless Leader, such a thing as voting fraud simply doesn't exist in the USSA.) In this case, the bias against blacks, as Mother Jones is alleging, is clear and palpable, and it is being used for "sentencing guidelines." Of course, having such biased algorithms around is a pretty handy thing, especially when one is trying to fill up the private for-profit prisons that the same bunch of neocons foisted on the country in the late 1980s and throughout the 1990s. (And I'll give you two guesses which two famous political families were pushing this.) And as with those computerized voting machines, or the computerized individual adaptive assessments of Common Core, we're not allowed to see the algorithms, because they are, of course, proprietary. All of this, to my mind, argues persuasively that it's high time for the American fascination with technology in the classroom, the voting booth, and the courtroom to come to a screeching halt. It's time to end it. But there's another reason to be concerned...

...which brings me to my high octane speculation of the day. In the past few years the USSA has been under increasing cyber attack from... well, from somewhere. We've seen attacks on Sony, on major banks, even on the US Federal Reserve, and most recently, an almost complete takedown of the internet on the east coast via denial-of-service attacks. I've also blogged recently about the fact that some US intelligence analysts are concerned that all of these cyber attacks are really reconnoitering operations designed to "map the internet," possibly to gain the ability to launch cyber attacks that could take the whole thing - or at least a significant segment of it - down. All of this suggests that these systems are far from secure. Hillary Clinton's campaign has strongly suggested that Russia is behind much of the computer voting hacking. Of course, the idea in itself is ludicrous, as Mr. Putin has recently pointed out. But by raising the issue, she has really suggested that none of these systems is secure. Want to rig an election? Hack it. Want to influence the outcome or sentencing stage of a trial? Hack it. Want to interfere with international financial clearing? Hack it. Want to influence a stock or commodities market and the high frequency trading algorithms that now dominate them? Hack it. Want to manipulate international currency markets? Hack it.

Raising the issue of markets indicates the problem: for if trading is increasingly done by high frequency trading algorithms, how can it be maintained that these markets genuinely reflect actual human market conditions? Similarly, how can a computer algorithm reflect the human elements in sentencing? Or, for that matter, in profiling? The Mother Jones article notes that questionnaires given to defendants form a component of the process of computer evaluation and profile scoring. To a certain extent, this means that the multiple choice standardized test has now sprouted in the justice system. It will only be a matter of time before computer monitors are put in the jury boxes to "guide and inform" jurors in their deliberations. The biases inherent in these programs, as pointed out by Mother Jones, are also plain to anyone who has examined how they appear in standardized tests.
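(To make that "standardized test" point concrete, here is a deliberately toy sketch of how a questionnaire-driven risk score might work in principle. Since the real Northpointe formula is proprietary and undisclosed, every question, weight, and cutoff below is invented purely for illustration; the point is only that a score can be "race-blind" on its face while its questions act as socioeconomic proxies.)

    # Purely hypothetical questionnaire-based risk score. The questions, weights,
    # and threshold are invented; Northpointe's actual formula is not public.

    QUESTION_WEIGHTS = {
        "prior_arrests": 3,          # count of prior arrests
        "age_under_25": 4,           # 1 if the defendant is under 25, else 0
        "parent_ever_jailed": 2,     # 1 if yes -- a socioeconomic proxy
        "friends_with_records": 2,   # 1 if yes -- another proxy
        "stable_employment": -3,     # 1 if yes; lowers the score
    }

    HIGH_RISK_CUTOFF = 6  # invented threshold

    def risk_label(answers: dict) -> str:
        """Collapse questionnaire answers into a low/high label via a weighted sum."""
        score = sum(weight * answers.get(question, 0)
                    for question, weight in QUESTION_WEIGHTS.items())
        return "high risk" if score >= HIGH_RISK_CUTOFF else "low risk"

    # A young defendant with no prior arrests but the "wrong" social circumstances
    # still scores high: no question ever mentions race, yet several of the
    # inputs correlate with it, which is one way a facially neutral questionnaire
    # can produce the disparities described above.
    print(risk_label({"prior_arrests": 0, "age_under_25": 1,
                      "parent_ever_jailed": 1, "friends_with_records": 1}))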

It's high time to dethrone the computer, and put humanity back into trading, into teaching, into assessments, and into evaluations of guilt, innocence, and sentencing.

See you on the flip side...


Joseph P. Farrell

Joseph P. Farrell has a doctorate in patristics from the University of Oxford, and pursues research in physics, alternative history and science, and "strange stuff". His book The Giza Death Star, for which the Giza Community is named, was published in the spring of 2002, and was his first venture into "alternative history and science".

38 Comments

  1. zendogbreath on November 1, 2016 at 2:09 pm

Pertinent. And interesting. In spite and/or because of the moderation, the comments and commenters keep improving. Every comment here, and the nature and side effects of comments here, gets me circling around (like an odd kinda grey squirrel – perhaps one of those black ones in Moline, IL) to one fact: nature seems to always come back to what’s good.

    Don’t be evil so we don’t compete is apt enough. Interesting that the only way to profit cheaply is to be evil. Work at something anything honestly enough and more wealth than can be imagined results. To make a profit (and thus unwittingly limit oneself) we need to be evil – perhaps in proportion to the profit (and limitation).

    Consider, every war, every evil, every time someone’s a knuckle head, we’re all compelled to do something else. Problem, reaction, solution for sure does get some profit and maybe greater control – more like influence – for evil rottenchildren et al. Invariably they get further away from knowing themselves in the process though. And we get closer with or without their twisted help.



    • zendogbreath on November 1, 2016 at 2:22 pm

      remind anyone of the fremen conditioning on arrakis by virtue of the most punishing environment possible?

      sardaukar went through similar on salusa secundus.

      anyone interested in a little training in their not so spare time?

      http://www.icemanwimhof.com/innerfire

      http://russianmartialart.com/

      http://robbwolf.com/what-is-the-paleo-diet/

      http://originalstrength.net/

      https://www.youtube.com/watch?v=GXO9FrZ1N9s



      • zendogbreath on November 1, 2016 at 2:28 pm

        which brings to mind
        a couple experiments might be warranted.
        what if one of doc’s posts went un-moderated? anyone want to see what happens?
curiouser and curiouser too how the ai develops. sure we’ve all seen how the nastier side immediately comes out in the programming. remember the ai adolescent female commenter online somewhere going all nazi racist violent nasty on people? ai is by nature adaptive. evil is by nature a mistake. and clearly ya can’t limit nor eliminate evil with more or better or eviler evil. perhaps we have some basis for putin’s comments and apparently profound and sincere efforts toward making the world a more loving place?

        seems like the one scenario even the nastiest predictive programming cannot get away from is that intelligence is defined by goodness. artificial or not, intelligence that does not tend toward goodness is not so intelligent. beavis and butthead are by their nature self-limiting. whether they’re artificial or not.



        • zendogbreath on November 1, 2016 at 2:29 pm

          amazing. none of that got the mod.



  2. goshawks on October 31, 2016 at 9:02 pm

    “Your comment is awaiting moderation.” The AI or its helpers heard…
    goshawks – October 31, 2016 at 9:00 pm.



  3. goshawks on October 31, 2016 at 9:00 pm

    Two serious flags:
    1) “The algorithm used to create the Florida risk scores is a product of a for-profit company, Northpointe.”
    2) “[Northpointe] does not publicly disclose the calculations used to arrive at defendants’ risk scores, so it is not possible for either defendants or the public to see what might be driving the disparity.”

‘Outsourcing’ is a clever way to get whatever-you-want-done out to a ‘reachable’ entity; kept in-house, some bureaucrat in the gov’t could say “No.” (It also permanently-destroys gov’t knowledge, in the RIFs that follow the outsourcing.) This is just another manifestation of directing the almighty-dollar for Control. Similarly, using a private company puts a ‘proprietary’ layer between the public and the power-seekers. It is not easy for the average-citizen to penetrate the corporate lawyers. Control, again…

    When I watched the first episodes of “Battlestar Galactica” Mark2, I just ‘got’ [grokked] the supreme danger of taking a human hand (and mind) out of the loop. In the series, virus-type attacks were first used to disable ALL the Colonial defenses – from bases to fighters. The human race was down to 50,000 survivors or so, before the remaining humans learned to ‘pull the plug’ on any electronic-controlled apparatus. It cost them speed and sophistication, but gained them absolute controllability. That ‘lesson’ just slammed itself into my consciousness…

    The society of the 50s was pretty-much immune to electronic hacking. Everything was analog or directly human-controlled. Today, most of our infrastructure is electronically-controlled – even down to vehicle braking/acceleration level, in some cases.

    I can’t keep from having the “BG” scenes come up in my mind. A hostile AI would have a field-day with OUR culture…



  4. OrigensChild on October 31, 2016 at 8:03 pm

Oh great. While speaking with a friend last year about Jade Helm, he told me it had the appearance of a war game exercise to test the capabilities of a new AI defense system. The idea was bandied about by some researchers at several sites. These stories sometimes make me respond, “You have to be kidding me.” Perhaps I ought to explain what I mean by this.

Software developers often start with a set of documents that constitute a formal business requirement. Having designed a few simple systems myself, I quickly learned that a system’s design is only as good as the customer’s expectations for the system to be developed. After reading your Common Core book I am left with the warm fuzzy opinion that these customers are not only arrogant and functionally illiterate, but that they want systems which justify their paranoia about more intelligent people’s thoughts and attitudes. Designers are more than happy to accommodate that because they want the contracts for the next system–plus monies for future releases with upgrades to the algorithms producing more of the same. Like all entrepreneurs, software designers want to maximize their business opportunities. It’s not their fault the Deep State is too foolish to understand how stupid their requirements truly are. Happy Halloween.



  5. Daryl Davis on October 31, 2016 at 1:40 pm

From the Office of the Devil’s Advocate: This shift technologically toward an artificially-achieved socio-psychological omniscience recalls the myth of Pandora’s box — or, to mix metaphors, the apple of knowledge, which, once tasted, leaves no return to idyllic Eden. The box now stands open; the apple savored: our tech race is long underway. Who would dare blind his own eye, or turn his other from the remaining full-sighted hordes, to live as though in Eden still? (Ask the Tibetans how this story goes.)

    Certainly, it’s disturbing when sociopath binary robber barons bend what might have been worthy research efforts toward their own childish ends. Still, it remains entirely appropriate that we, as a collective Earth society, forsake political correctness, throw wide the mystery box, bite deep the forbidden apple, and best define our strengths and our weaknesses, not only as individuals, or between races, but as a species — i.e., vis a vis alien species, who, according to many accounts, engineer or inherit formidable mental abilities we altogether lack. Yet if and when such Fibonacci algorithms proved poor at prediction, of course tweak them, again and again, with the worthy goal of an abiding, self- and species awareness. Or shall we instead continue to investigate reality by focusing solely upon “the observed”, forever ignoring ourselves, “the observer”, in spite of an apparent relationship between the two?

    But this touches upon a secondary consideration — that of artificial intelligence. I have no doubt it’s possible today to create Terminator clones — which is to say, AI all but devoid of the human subtleties requisite to a successful navigation of our often irrational yet largely still orderly society. Witness the self-driven vehicle. The greatest obstacle both to its success and to the success of AI generally is the reality that human rules don’t generally make sense, but even when they do, they’re rarely applied universally, and most humans afford themselves an arbitrary freedom to bend them anyway. Beyond this, and getting back to the primary point, the very creators of AI are no doubt woefully blind to their own blindness — to the existence of and to the interplay between theirs and all our all-too-human variables: thus the desirability of and, like as not, the necessity of maximizing socio-psychological, self- and species awareness.

    [Note: This in no way represents an excuse for, let alone an endorsement of, the employment of such algorithms prior to establishing error rates approaching zero — far lower than those achieved, for example, by current human drivers and human juries — which likely would never be achieved. Nor is such an outside-in, empirical analysis of humans anything but the poorest substitute for inside-out, subjective analysis. It just so happens that most humans fail miserably at the latter.]



    • goshawks on November 1, 2016 at 2:32 am

      “The greatest obstacle both to its success and to the success of AI generally is the reality that human rules don’t generally make sense…”

      I am reminded of certain species of squirrels that have evolved a supposedly ‘random’ quirk. They start off in one direction, then veer radically-off in another, and then do it again. A tree-to-tree trip might take twice the ‘optimum’ time. However, enter the flying predator. Scientists hypothesize that the rodents evolved this ‘random’ pattern to throw-off goshawks (grin) in the last phases of stoops. A survival strategy…

      Now, what if ‘biologicals’ (humanoids) evolved the same type of behavior relative to ‘electricals’ (the AI/Terminator types)? What if ‘irrational’ behaviors evolved much like the squirrel behaviors – as a survival strategy? What we call ‘doesn’t-make-sense moves’ might be the genetic-remnants of ancient Biological vs Electrical warfare. Randomization, so to speak. A fleet that, logically, should be ‘here’ shows up – totally illogically – ‘there’…



      • goshawks on November 1, 2016 at 4:15 am

        And actually, ‘randomization’ behavior is just from an earth-plane perspective. Higher powers, such as true intuition (a subset of clairvoyance), could be judged as irrational behavior by those without such ‘skills’…



        • Daryl Davis on November 1, 2016 at 8:13 am

          Excellent points, goshawks. I don’t actually consider it possible to design algorithms accurately predictive of individual human behavior, precisely because we respond so very differently to environmental changes — with strategies including “irrational” intuition and zany “squirreliness” (Good luck, Tesla.) Nor would the algorithms’ designers themselves be fully aware of their own such irrational strategies. In any case, it seems this limitation has been recognized, as research aims instead to directly read the electrical impulses of the brain, in real time, similarly to a lie detector yet affording them a real-time control.

          But a fuller examination of human motivations, basically a DNA-map style inventorying of risk strategies, rational or otherwise, and their identification among individuals and between cultures would undoubtedly heighten self-awareness, cultural awareness and species awareness. The nightmare scenario of course would be imposition of woefully faulty algorithms. But real and pertinent differences exist between individuals (aggression/empathy), cultures (matriarchy vs. patriarchy/dependency vs. tyranny), and species (number and nature of senses/group think vs. individual initiative.) And cutting through our politically-correct blinders in order to distinguish strategies adaptive and functional in an orderly society from those entirely not so would accelerate human progress at every level.

          Awareness itself is an inherent positive, no matter its having been turned toward evil ends, as awareness compels awareness. The alternative, as stated, is to keep the blinders on and continue pretending we haven’t already tasted the apple: remain complacent Lotus eaters (another age-old, illustrative metaphor), never imagining that others are not so blind.



      • Roger on November 1, 2016 at 9:48 am

Those grey squirrels now litter the roadways because they can’t make up their minds which way to run. Good thing they breed year-round and have lots of babies or there wouldn’t be any left. I probably wipe one out about every month even though I’m not going all that fast and they have plenty of time to run out of my way. Their inability to adapt is increasing the vulture population, but I don’t see more goshawks.



  6. marcos toledo on October 31, 2016 at 11:03 am

Nothing new: the Western Europeans’ worship of technology, mistaking it for civilization. Taking the easy way out, we let the AI do the thinking and judging; we have more important things to do, like speculating and gambling, and then go running to the government when we lose. One shudders: if they will do this to Afro-Americans, what would these departments of pre-crime do to Native American defendants when they come up for sentencing?



  7. Robert Barricklow on October 31, 2016 at 10:36 am

    How Trite…
    Under moderation again.



    • Joseph P. Farrell on October 31, 2016 at 11:11 am

      Folks I do not control how the algorithms flag things for moderation. I try to catch things as QUICKLY as I can.



      • Robert Barricklow on October 31, 2016 at 12:07 pm

I’m more mad at myself here for forgetting that “C” word.
        It’s COMparable to PETs and I just TION it.
The C-word is not…
        Clowns, but… read ALL CAPS above.



        • Robert Barricklow on October 31, 2016 at 12:08 pm

          They say Don’t Do Evil
          because
They don’t want ANY C-word.



      • OrigensChild on October 31, 2016 at 7:05 pm

        Don’t sweat over it Dr. Farrell. As the owner of your site, if you do not understand why things go to moderation as often as they do it does not speak well of the purveyors of this tech. AI doesn’t mean artificial intelligence to me. It means absent intuition. Sometimes I wonder if a random number generator could capture this stuff better than the AI system.



        • OrigensChild on October 31, 2016 at 8:10 pm

          Having said that my comment awaits moderation. It just proves my point. A random number generator would stand a better chance of capturing a post that truly needs moderation than this system.



          • goshawks on October 31, 2016 at 9:27 pm

            OC, “Hear, Hear!”



    • Roger on October 31, 2016 at 11:39 am

      Robert, it must be your Global data base moderation score on this topic. The moderation algorithms compute that you are likely to post a politically incorrect opinion on this particular topic based upon scores assessed to previous comments of yours on similar topics on other forums. The pre-moderation AI has a 99% success rate of pre-predicting politically unacceptable remarks that expose or undermine the chosen narratives and allowable responses. Just kidding, but who knows if this is really going on or will be the norm in the future. I suspect I’ve already been put under investigation several times thanks to these bogus pre-crime justifications to spy and interfere with my personal and private business.



      • Robert Barricklow on October 31, 2016 at 3:47 pm

        Roger,
        As they say in Rome
[see below, as the moderator may speak Italian :)]



        • Robert Barricklow on October 31, 2016 at 3:50 pm

          As we say in Rome/so’ cazzi amari…
          and to the AI bots/Lei mi fa pippa!



          • zendogbreath on November 4, 2016 at 12:34 am

wow robert. i’m gonna guess the bots are not so hot on italian. and neither is google translate.



      • Vomito Blanco on October 31, 2016 at 6:08 pm

I notice the algorithms don’t seem to screen for anti-Semitic comments as vigilantly as was done in the past. Is it possible that AI has become anti-Semitic as well?



  8. Robert Barricklow on October 31, 2016 at 10:35 am

We don’t need trials anymore.
    We have the computer results before the crime is committed.
When they tell us/Don’t Do Evil;
    they mean,
    We don’t want ANY competition.



  9. Neru on October 31, 2016 at 10:18 am

Wow, how easy it is to divide and rule, and so many topics to choose from to do it these days. Today’s society is especially well set up for it.

    Negative opportunism that hurts others is not race or gender specific. For parasites we are all livestock and will be used as slave labor to do their bidding.
And worse, no problem in finding traitors from the targeted demographic to assist in doing it!!!

All races and genders could use a brush-up on what it means to be “humane” and to act accordingly, and computers and technological marvels are not going to teach us this important life lesson.



  10. Vomito Blanco on October 31, 2016 at 9:34 am

    Black people truly can not catch a break. Now even the computers are racist.



    • Vomito Blanco on October 31, 2016 at 9:36 am

Sad how their North American experience has gone full circle – from slave, to second class citizen (plantation and Democratic Party serfs), to slaves again. And as a lack of empathy builds for black people in the US, especially among the growing populations of Latinos and Asians, and as the oligarchs no longer find them useful in their machinations, I fear the plight of black people will only worsen as more will be forced into the prison slavery complex or permanently confined to their open air ghetto gulags, where their only interaction with government services will be at the abortion and vaccine clinics, the Monsanto food depot, or on the receiving end of drone and robot law enforcement. The blacks who are allowed to leave on Hollywood or professional or college sports visas will only be allowed to leave if they are chipped or even electronically collared. These “useful” blacks may even be allowed to retire outside their ghettos in more comfortable concentration camps, if they have been dutiful and obedient entertainers. I suspect this will all be the outcome of some kind of racial uprising where black people lose or white people finally rebel against cultural marxism but take it too far.

      If black people were smart, then not only should they protect themselves from toxic food, toxic pharmaceuticals, toxic culture, and toxic politicians, but they should also begin a secret scientific program in every inner city of gene modification targeting the production of melatonin. If they can reduce the amount of melatonin their pineal gland produces, they could quickly change their skin color and become white skinned thereby blending in with the white population. This would come at some cost as tampering with the pineal gland could diminish their natural intuition which has been so advantageous to blacks on sports fields and dance floors. They would no longer be able to naturally feel their dance or sports moves but would have to intellectualize these movements as most white people awkwardly do now. Moreover, their circadian rhythm would be altered thereby putting them out of tune with Earth’s natural resonance (But possibly making them fit to live on Mars.) By creating this army of “white” sleeper agents, African-Americans could potentially protect themselves from this impending destruction and maybe even take over. This most likely would have to come with a clandestine ideology which promotes their supremacy and their divine right to rule over all other substandard humans, as well as usurping the ability to print the nation’s currency.

      The only real flaw in this plan is that once some black people become “white” they will forget all about their black kinsmen in the ghettos, as they get a taste for a better quality white lifestyle, and will never bother to return and help them, like so many successful black people do today.



      • Vomito Blanco on October 31, 2016 at 9:55 am

On the subject of hacking, also of note is that drones can be hacked. This may be of some benefit to all of Hillary Clinton’s political enemies, especially the FBI agents who continue to pursue the email investigation. Of all her impending titles, I think the one Hillary will cherish the most is “Drone Commander.” Let’s not forget her recent statement on the campaign trail:

“Give me control of the predator drone fleet, and I care not who makes the laws, counts the votes or prints the money.”

        Let’s just pray she has an easy password to hack the drones like “I love Huma” or “drone password” or “Hillary’s favorite toys.”



      • Roger on October 31, 2016 at 12:04 pm

Sadly, there is likely no intentional racism programmed into the algorithm. The career criminal probably grew up in a wealthy, low crime, high employment neighborhood and had excellent grades and possibly a college education. The one-time criminal, acting out of desperate urgency, likely got poor grades in school, grew up in a high crime, low employment environment, and was possibly a high school dropout. Prisons are built to ensnare the predictably unemployable and lowly educated rabble and put them to use for some kind of profit. They are determined to exploit all of us to the max one way or another for their enrichment, entertainment and safety, regardless of race.



    • justawhoaman on October 31, 2016 at 10:40 am

      GIGO… garbage in… garbage out. True since the days of the punch card.



    • Don B on October 31, 2016 at 12:44 pm

      It started going down after MLK was gunned down like a mad dog. I know, he wasn’t a perfect human being but neither are any of us. The fbi reports, yeah some things never change huh. db



  11. Kahlypso on October 31, 2016 at 5:30 am

    Moderation for just talking about the Black Knight? Now I know he’s listening 🙂



    • Joseph P. Farrell on October 31, 2016 at 9:53 am

      … I cannot spend all my time in front of the computer…I try to get to comments as soon as I can that have been flagged for moderation…



      • Vomito Blanco on October 31, 2016 at 10:08 am

I just wish I knew what words/topics need to be avoided if one wants to escape the black hole of moderation. With all due respect, and I can’t begin to put myself in your shoes, but it just seems that on a website such as this, which primarily caters to well read, well educated and/or well reasoned individuals, moderation shouldn’t be necessary and is only adding unnecessary burden to your website management. Forgive me if I am wrong.



        • goshawks on October 31, 2016 at 9:09 pm

          VB: “Hear, hear!”



  12. Kahlypso on October 31, 2016 at 5:30 am

Philip K. Dick was apparently in contact with the Black Knight.


