ERIC’S TECH TALK: Surviving the surveillance state

An artist’s rendering of a Neanderthal.

by Eric W. Austin

Let me present you with a crazy idea, and then let me show you why it’s not so crazy after all. In fact, it’s already becoming a reality.

About ten years ago, I read a series of science-fiction novels by Robert J. Sawyer called The Neanderthal Parallax. The first novel, Hominids, won the coveted Hugo Award in 2003. It opens with a scientist, Ponter Boddit, as he conducts an experiment using an advanced quantum computer. Only Boddit is no ordinary scientist: he’s a Neanderthal living on a parallel Earth where the Neanderthal survived to the modern era, rather than us homo sapiens.

Contrary to common misconception, the Neanderthal were not our progenitors, but a species of human which co-existed with us for millennia before mysteriously dying off about 28,000 years ago, during the last ice age. Based on DNA evidence, modern humans and Neanderthal shared a common ancestor about 660,000 years ago.

Scientists debate the causes of the Neanderthal extinction. Were they less adaptable to the drastic climate changes happening at the time? Did conflict with our own species result in their genocide? Perhaps, as some researchers have proposed, homo sapiens survived over their Neanderthal cousins because we had a greater propensity for cooperation.

In any case, the traditional idea of Neanderthal as dumb, lumbering oafs is not borne out by the latest research, and interbreeding between Neanderthal and modern humans was actually pretty common. In fact, those of us coming from European stock have received between one and four percent of our DNA from our Neanderthal forebears.

The point I’m trying to make is that it could just as easily have been our species, homo sapiens, that died off, leaving the Neanderthal to survive into the modern age instead.

This is the concept author Robert Sawyer plays with in his trilogy of novels. Sawyer’s main character, the Neanderthal scientist Ponter Boddit, lives in such an alternate world. In the novel, Boddit’s quantum experiment inadvertently opens a door to a parallel world — our own — and this sets up the story for the rest of the series.

The novels gained such critical praise at the time of their publication not just because of their seamless weaving of science and story on top of a clever premise, but also because of the thought Sawyer put into the culture of these Neanderthal living on an alternate Earth.

The Neanderthal, according to archeologists, were more resilient and physically stronger than their homo sapiens cousins. A single blow from a Neanderthal is enough to kill a fellow citizen, and in consequence the Neanderthal of Sawyer’s novels have taken drastic steps to reduce violence in their society. Any incident of serious physical violence results in the castration of the implicated individual and all others who share at least half his genes, including parents, siblings and children. In this way, violence has slowly been weeded out of the Neanderthal gene pool.

A comparison between human (left) and Neanderthal (right) skulls.

About three decades before the start of the first novel, Hominids, a new technology is introduced into Neanderthal society to further curb crime and violence. Each Neanderthal child has something called a “companion implant” inserted under the skin of their forearm. This implant is a recording device which monitors every individual constantly with both sound and video. Data from the device is beamed in real-time to a database dubbed the “alibi archive,” and when there is any accusation of criminal conduct, this record is available to exonerate or convict the individual being charged.

Strict laws govern when and by whom this information can be accessed. Think of our own laws regarding search and seizure outlined in the Fourth Amendment to the Constitution.

By these two elements — a companion implant which monitors each citizen 24/7, and castration as the only punishment for convicted offenders — violence and crime have virtually been eliminated from Neanderthal society, and incarceration has become a thing of the past.

While I’m not advocating for the castration of all violent criminals and their relations, the idea of a companion implant is something that has stuck with me in the years since I first read Sawyer’s novels.

Could such a device eliminate crime and violence from our own society?

Let’s take a closer look at this idea before dismissing it completely. One of the first objections is about the loss of privacy. Constant surveillance? Even in the bathroom? Isn’t that crazy?

Consider this: according to a 2009 article in Popular Mechanics magazine, there are an estimated 30 million security cameras in the United States, recording more than four billion hours of footage every week, and that number has likely climbed significantly in the nine years since the article was published.

Doubtless there’s not a day that goes by that you are not captured by some camera: at the bank, the grocery store, passing a traffic light, going through the toll booth on the interstate. Even standing in your own backyard, you are not invisible to the overhead gaze of government satellites. We are already constantly under surveillance.

Add to this the proliferation of user-generated content on sites like Facebook, Twitter and Instagram. How often do you show up in the background of someone else’s selfie or video podcast?

Oh, you might say, but these are random bits, scattered across the Internet from many different sources. We are protected by the very diffusion of this data!

To a human being, perhaps this is true, but for a computer, the Internet is one big database, and more and more, artificial intelligences are used to sift through this data instead of humans.

Take, for example, Liberty Island, home of the Statue of Liberty. A hot target for terrorists, the most visited location in America is also the most heavily surveilled. With hundreds of cameras covering every square inch of the island, you would need an army of human operators to watch all the screens for anything out of place. This is obviously unfeasible, so they have turned to the latest in artificial intelligence instead. AI technology can identify individuals via facial recognition, detect if a bag has been left unattended, or send an alert to its human operators if it detects anything amiss.

And we are not only surveilled via strategically placed security cameras either. Our credit card receipts, phone calls, text messages, Facebook posts and emails all leave behind a digital trail of our activities. We are simply not aware of how thoroughly our lives are digitally documented because that information is held by many different sources across a variety of mediums.

For example, so many men have been caught in their wandering ways by evidence obtained from interstate E-ZPass records, it’s led one New York divorce attorney to call it “the easy way to show you took the off-ramp to adultery.”

And with the advancements in artificial intelligence, especially deep learning (which I wrote about last week), this information is becoming more accessible to more people as computer intelligences become better at sifting through it.

We have, in essence, created the “companion implant” of Sawyer’s novels without anyone ever having agreed to undergo the necessary surgery.

The idea of an always-on recording device implanted into our arms at birth, watching everything we do, sounds crazy until you sit down and realize we’re heading in that direction already.

The very aspect that has, up ‘til now, protected us from this constant surveillance — the diffusion of the data, the fact that it’s spread out among many different sources, and the great quantity of data which makes it difficult for humans to sift through — will soon cease to be a limiting factor in the coming age of AI. Instead, that diffusion will begin to work against us, since it is difficult to adequately control access to data collected by so many different entities.

A personal monitoring device, which records every single moment of our day, would be preferable to the dozens of cameras and other methods which currently track us. A single source could be more easily protected, and laws governing access to its data could be more easily controlled.

Instead, we have built a surveillance society where privacy dies by a thousand cuts, where the body politic lies bleeding in the center lane of the information superhighway, while we stand around and complain about the inconvenience of spectator slowing.

Eric W. Austin writes about technology and community issues. He can be reached by email at ericwaustin@gmail.com.

ERIC’S TECH TALK – Deepfake: When you can’t believe your eyes

A screenshot from the fake Obama video created by researchers at the University of Washington.

by Eric W. Austin

Fake news. Fake videos. Fake photos. The way things are heading, the 21st century is likely to be known as the Fake Century, and it’s only going to get worse from here.

About a year ago, I came across a short BBC News report. It talked about an initiative by researchers at the University of Washington to create a hyper-realistic video of President Obama saying things he never said. On YouTube, they posted a clip of the real Obama alongside the fake Obama the researchers had created. I couldn’t tell the difference.

Welcome to the deepfake future.

“Deepfake” is probably not a term you’ve heard a lot about up ‘til now, but expect that to change over the next few years. The term is derived from the technology driving it, deep learning, a branch of artificial intelligence emphasizing machine learning through the use of neural networks and other advanced techniques. When Facebook tags you in a photo uploaded by a friend, that’s an example of deep learning in action. It’s an effort to replicate human-like information processing in a computer.
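For the curious, here’s a toy illustration of the basic building block of a neural network: a single “neuron” that weighs its inputs and produces a score. This is not Facebook’s actual system, and the numbers are invented purely to show the idea.

```python
import math

# A single artificial "neuron": a weighted sum of its inputs pushed
# through a squashing function. Deep-learning networks stack millions
# of these in layers. The weights here are invented for illustration.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid: squashes output to 0..1

# Imagine three pixel-derived features scoring "how face-like" a
# region of a photo is.
features = [0.8, 0.2, 0.9]
print(neuron(features, weights=[1.5, -0.7, 2.0], bias=-1.0))  # ~0.87
```

Training a network means nudging millions of such weights until the scores come out right; that, in a nutshell, is the “learning” in deep learning.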

Artificial intelligence is not just getting good at recognizing human faces; it’s becoming good at creating them, too. By feeding an A.I. thousands of images or video of someone, for example a public figure, the computer can then use that information to create a new image or video of the person that is nearly indistinguishable from the real thing.

No, Einstein never went bicycling near a nuclear test. This photo is fake.

Of course, this sort of fakery has been around for a long time in photography. Do an unfiltered Google image search for any attractive female celebrity, and you’re likely to find a few pictures with the celebrity’s head photoshopped onto the body of a porn actress in a compromising position. Search for images of UFOs or the Loch Ness Monster, and you’ll find dozens of fake photos, many of which successfully fooled the experts for years.

But what we’re talking about here is on a completely different level. Last year I wrote about a new advancement in artificial intelligence allowing a computer to mimic the voice of a real person. Feed the computer 60 seconds of someone speaking and that computer can re-create their voice saying absolutely anything you like.

Deepfake is the culmination of these two technologies, and when audio and video can be faked convincingly using a computer algorithm, what hope is there for truth in the wild world of the web?

If the past couple years have taught us anything, it’s that there are deep partisan divides in this country and each side has a different version of the truth. It’s not so much a battle of political parties as it is combat between contrasting narratives. It’s a war for belief.

Conspiracy theories have flourished in this environment, as each side of the debate is all too willing to believe the worst of the other side — whether it’s true or not. I have written several times about the methods Russia and others have used to influence the U.S. electorate, but it’s this willingness to believe the worst about our fellow Americans that is most often exploited by our adversaries.

Communist dictator Joseph Stalin was infamous for destroying records and altering images to remove people from history after they had fallen out of favor with him.

Likewise, when the Roman sect of Christianity gained ascendancy in the early 4th century CE, they set about destroying the gospels held sacred by other groups. This was done in order to paint the picture of a consistently unified church without divisions (“catholic” comes from the Greek word for “universal”).

In both these cases, narratives were shaped by eliminating any information that contradicted the approved version of events. However, with the advent of the Internet and a mostly literate population, that method of controlling the narrative just isn’t possible anymore. Instead, the technique has been adjusted to one which floods the public space with so much false and misleading information that even intelligent, well-meaning people have trouble telling the difference between fact and fiction.

If, as Thomas Jefferson once wrote, a well-informed electorate is a prerequisite to a successful democracy, these three elements – our willingness to believe the worst of our political opponents, the recent trend of controlling the narrative by flooding the public consciousness with misinformation to obscure the truth, and the advancements of technology allowing this fakery to flourish and spread – are combining to create a challenge to our republic like nothing we’ve experienced before.

What can you do about the coming deepfake flood? Let me give you some advice I take myself: Make sure you rely on a range of diverse and credible sources. Regularly read sources with a bias different from your own, and stay away from those on the extreme edges of the political divide. Consult websites like AllSides.com or MediaBiasFactCheck.com to see where your favorite news source falls on the political spectrum.

We have entered the era of post-truth politics, but that doesn’t mean we have to lose our way in the Internet’s labyrinth of lies. It means we need to develop a new set of skills to navigate the environment in which we now find ourselves.

The truth hasn’t gone away. It’s just lost in a Where’s Waldo world of obfuscation. Search hard enough, and you’ll see it’s still there.

Eric W. Austin writes about technology and community issues. He can be reached by email at ericwaustin@gmail.com.

ERIC’S TECH TALK – Kids and social media: What to know

by Eric W. Austin

Probably the most invasive aspect of the technological revolution in the last two decades is the ubiquity of social media in our daily lives. From entire articles in the New York Times devoted to the 280 characters tweeted by the president during his morning ablutions, to the fact that Facebook is the most popular source of news for millions of Americans, it’s impossible to escape the influence of social media.

Children born after the new millennium have grown up with a daily digest of this bite-sized brain spill. How is it affecting them, and how has their use of it changed over time?

A new study released this year tries to answer some of those questions. This survey of teen social media use was sponsored by Common Sense Media, a nonprofit which describes itself as “the leading independent nonprofit organization dedicated to helping kids thrive in a world of media and technology.”

The survey of 1,141 adolescents between the ages of 13 and 17 is a follow-up to an earlier study the organization did of 1,000 teens back in 2012. Each survey was conducted on a separate and random cross-section of teens of different ethnicities, socio-economic backgrounds and geographic locations, proportional to their representation in the U.S. population.

Common Sense Media aims to “empower parents, teachers, and policymakers by providing unbiased information, trusted advice, and innovative tools to help them harness the power of media and technology as a positive force in all kids’ lives.”

Their website is very well organized, and I highly recommend it to parents and teachers trying to navigate the increasingly complex web of social media services available online.

According to the new study, although the percentage of teens who use social media, about 81 percent, hasn’t changed from the survey done six years ago, other factors, such as frequency of use, have changed significantly.


In 2012, only 34 percent of teens surveyed said they use social media more than once daily. Today that number has more than doubled, with 70 percent now saying they access social media multiple times per day. In fact, 34 percent of teens report using social media several times an hour, and 16 percent admitted to using it “almost constantly.”

Some of this increase, according to the researchers, may have to do with a substantial increase in teens’ access to mobile devices. The percentage of teens with smartphones has more than doubled in the last six years, from 41 to 89 percent, and – if you include those who access social media from a non-phone device, such as an iPad or Android tablet – that number rises to 93 percent.

Facebook’s position as the dominant social media site has also eroded dramatically in the six years since the last survey was taken. In 2012, 68 percent of teens listed Facebook as their primary social media site. In the latest study, from 2018, that number has dropped to only 15 percent, with Snapchat rising to the top at 41 percent and Instagram at 22 percent.

One 16-year-old respondent, when asked who she still talks to on Facebook, responded, “My grandparents.”

Along with organizing teen respondents according to household income, ethnicity, age and gender, the survey administrators also rated each teen on something called a social-emotional well-being (SEWB) scale. This “11-item scale measures attributes related to SEWB in adolescents as identified by the National Institute for Clinical Excellence (such as happiness, depression, loneliness, confidence, self-esteem, and parental relations).”

Teens were presented with a series of statements regarding these topics, and asked whether they thought the statements were “a lot,” “somewhat,” “not too much,” or “not at all” like them. Then, each teen was assigned to one of three groups depending on their responses: the high end of the scale (19 percent of respondents), the medium group (63 percent), or the low end of the SEWB scale (17 percent).

There were significant differences between groups organized on this SEWB scale. For example, nearly half of those surveyed on the low end of this scale, 46 percent, said social media is “extremely” or “very” important to their lives, compared to just 32 percent of those rated at the high end of the scale.

While an overwhelming majority of teens surveyed indicate social media has had a positive impact on how they feel about themselves, those on the lower end of the SEWB scale were more likely to report a negative experience. For instance, nearly 70 percent of those on the low end report feeling left out or excluded on social media, compared with just 29 percent at the high end. In addition, 43 percent on the low end reported an experience of cyberbullying online, while only 5 percent in the upper group related similar experiences.

By a large majority, however, even those on the lower end of the SEWB scale report that social media has had a greater positive effect than a negative one on their lives. In fact, according to the survey, those at the lower end are actually more likely to say social media has a generally positive effect on them.

There have been some other important changes over the last six years as well. Whereas in 2012 “face-to-face” was still teens’ preferred method of interacting with their peers, the most recent study has seen the number of teens preferring face-to-face contact drop from 49 percent to 32 percent. Texting is now the most popular method of communicating, with 35 percent of teens listing it as their number one way to connect with friends.

And teens seem ahead of the curve when looking at the dangers of social media addiction. Fifty-four percent of respondents concede that social media “often distracts me when I should be paying attention to the people I’m with,” up from 44 percent in 2012. As well, nearly half (44 percent) admit to being frustrated with friends for using their devices when they are hanging out together.

It should also be noted that 33 percent of teens expressed a desire that their parents spend less time on their own devices, a 12-point increase from 2012.

And amid all the controversy about the power big tech companies have to sway public opinion, kids have already figured this out: 72 percent of teens think tech companies manipulate users to get them to spend more time on their platforms.

Teens also reported a mixed record on self-regulation when it comes to putting down their devices at important times, with 56 percent saying they do so: during meals (42 percent), while visiting with family (31 percent), or while doing homework (31 percent).

The teens surveyed were also asked about cyberbullying. Thirteen percent admitted to having “ever” experienced bullying online. Nine percent said they had been bullied online “many” or “a few times,” with the rest saying “once or twice.” Twenty-three percent of teens surveyed claim to have tried to help a classmate who was cyberbullied, either by talking to the individual, reporting the situation to an adult, or posting positive comments online to counter negative content.

According to the study, the most important aspect of social media for teens is the ability it gives them to express themselves creatively. More than one in four respondents (27 percent) said social media was an “extremely” or “very” important avenue for creative self-expression.

In an open-ended comment section on the survey, one 17-year-old girl wrote that “[s]ocial media allows me to have a creative outlet to express myself,” while a 14-year-old African-American girl said, “I get to share things I make.”

Several conclusions were highlighted by the researchers in their report. Overall, teens seem to find social media a generally positive addition to their lives, and there doesn’t seem to be any clear link between increases in depression rates and social media use.

Also, teens seem extremely savvy when it comes to the addictive nature of social media, and the attempts by tech companies to rope them into using it, more so perhaps than their parents. However, as with drug use, those on the lower end of the social-emotional well-being scale are more vulnerable to its negative effects.

Social media, whether it’s Facebook, Twitter, Snapchat, or something else that has yet to come along, is here to stay, a permanent remodeling of our social context, like television in the 1960s or radio before that. It has its negative and positive effects, like everything else, and it’s up to parents to guide their kids in using it wisely, and developing healthy habits that will carry them into adulthood.

Eric W. Austin writes about technology and community issues. He can be reached by email at ericwaustin@gmail.com.

ERIC’S TECH TALK: Russia hacked our election. This is how they did it.

NOTE: Aside from the anecdote about the typo in the IT staffer’s email, and the final quote from DNI Dan Coats, everything in this article comes straight from the FBI indictments linked at the bottom of this page.

by Eric W. Austin

Russia hacked our elections. This is how they did it.

The FBI has released two criminal indictments describing the Russian hacking and influence operations during the 2016 election. The picture they paint is incredibly detailed and deeply disturbing.

In this article, I’ll describe exactly what the FBI discovered and explain why we still have reason to worry. Quotations are taken directly from the criminal indictments released by the FBI Special Counsel’s office.

The story begins in 2013, when a new organization, the Internet Research Agency, was established in St. Petersburg, Russia, with the aim of changing US public policy by influencing American elections. According to the indictments, this agency employed hundreds of people, working both day and night shifts, ranging from “creators of fictitious personas to the technical and administrative support.” Its budget totaled “millions of US dollars” a year.

In 2014, the Internet Research Agency began studying American social media trends, particularly “groups on US social media sites dedicated to US politics and social issues.” They examined things like “the group’s size, the frequency of content placed by the group, and the level of audience engagement with the content, such as the average number of comments or responses to a post.” Their goal was to understand the mechanics of social media success in order to make their future influence operations in the United States more effective.

Using what they learned, they “created hundreds of social media accounts and used them to develop certain fictitious US personas into ‘leader[s] of public opinion’ in the United States.”

Sometime in 2015, the Russians began buying ads on social media to enhance their online profiles and to reach a wider audience. Soon they were “spending thousands of US dollars every month” on ads.

By the 2016 election season, Russian-controlled social media groups had “grown to hundreds of thousands of online followers.” These groups addressed a wide range of issues in the United States, including: immigration; social issues like Black Lives Matter; religion, particularly relating to Muslim Americans; and certain geographic regions, with groups like “South United” and “Heart of Texas.”

They often impersonated real Americans or American organizations, such as the Russian-controlled Twitter account @TEN_GOP, which claimed in its profile to represent the Tennessee Republican Party. By November 2016, this Twitter account had attracted over 100,000 followers.

From the very beginning, Russian strategy was to pass themselves off as American, and they went to great lengths to accomplish this. They bought the personal information and credit card numbers of real Americans from identity thieves on the dark web, and used those credentials to create fake social media profiles and open bank accounts to pay for their online activities.

With this network of social media accounts, they staged rallies and protests in New York, North Carolina, Pennsylvania and especially, Florida. These rallies were organized online, but the Russians often engaged real Americans to promote and participate in them. For example, in Florida, they “used false US personas to communicate with Trump Campaign staff involved in local community outreach.” At some rallies, they asked a “US person to build a cage on a flatbed truck and another US person to wear a costume portraying Clinton in a prison uniform.”

Staging rallies and spreading disinformation via social media was not all they were up to though. At the same time, they were trying to break into the email accounts of Clinton Campaign staffers, and to hack the computer networks of the Democratic Congressional Campaign Committee (DCCC) and the Democratic National Committee (DNC).

Starting in March 2016, according to the indictments, the Russians began sending “spearphishing” emails to employees at the DCCC, DNC, and the Clinton Campaign. These were emails mocked up to look like security notices from the recipient’s email provider, containing a link to a webpage that resembled an official password-change form. The page was actually controlled by Russian military intelligence, waiting to snatch the victim’s password as soon as they entered it.

One of their first targets was Clinton Campaign Chairman John Podesta. He received an email, apparently from Google, asking him to click a link and change his password. Suspicious, he forwarded the email to the campaign’s IT staff, and unfortunately received the reply: “This is a legitimate email. John needs to change his password immediately.” Dutifully, Podesta clicked the link and changed his password. The Russians were waiting. They promptly broke into his account and stole 50,000 emails. The IT staffer later claimed he had made a typo and instead meant to write, “This is not a legitimate email.” Never has a typo been more consequential.

In another case detailed in the indictments, the Russians created an email address nearly identical to a well-known Clinton campaign staffer. The address of this account deviated from the staffer’s real email address by only a single letter. Then, posing as that staffer, they sent spearphishing emails to more than 30 Clinton Campaign employees.
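For illustration, here is a minimal sketch of the kind of automated check that can catch such lookalike addresses, flagging any sender within one edit of a known colleague’s address. The staffer address below is invented.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete a character
                curr[j - 1] + 1,           # insert a character
                prev[j - 1] + (ca != cb),  # substitute a character
            ))
        prev = curr
    return prev[-1]

# Hypothetical staffer address, for illustration only.
KNOWN_ADDRESSES = {"hillary.staffer@campaign.org"}

def looks_spoofed(sender: str) -> bool:
    # Nearly identical to a known address, but not identical: suspicious.
    return any(0 < edit_distance(sender, known) <= 1
               for known in KNOWN_ADDRESSES)

print(looks_spoofed("hilary.staffer@campaign.org"))  # True: one letter off
```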

The Russians also used this technique to hack into the computer network of the DCCC. After gaining entry, they “installed and managed different types of malware to explore the DCCC network and steal data.” Once the malware had been installed, the Russians were able to take screenshots and capture every keystroke on the infected machines.

One of those hacked computers was used by a DCCC employee who also had access to the DNC computer network. When that user logged into the DNC network, the Russians stole their username and password and proceeded to invade the DNC network as well, installing Russian-developed malware on more than thirty DNC computers.

And the Russians stole more than emails. They took “gigabytes of data from DNC computers,” including opposition research, analytic data used for campaign strategy, and personal information on Democratic donors, among other things.

Then in June 2016, the hackers “constructed the online persona DCLeaks to release and publicize stolen election-related documents.” This persona included the website DCLeaks.com, along with a companion Twitter account and a Facebook page, where they claimed to be a group of “American hacktivists.”

However, a month earlier, the DNC and DCCC had finally become aware of the intrusions into their networks and hired a cybersecurity firm to investigate and prevent additional attacks. Shortly after the launch of the DCLeaks website, this firm finished its investigation and published its findings, identifying Russia as the source of the attacks.

In response to this allegation, and to further sow disinformation about the attacks, the Russians created a new online persona, Guccifer 2.0, who claimed to be a “lone Romanian hacker” and solely responsible for the cyber-attacks.

Under the guise of Guccifer 2.0, the Russians gave media interviews, provided a US Congressional candidate with “stolen documents related to the candidate’s opponent,” transferred “gigabytes of data stolen from the DCCC to a then-registered state lobbyist and online source of political news,” and coordinated with the website WikiLeaks.org in order to time the release of stolen documents to coincide with critical points in the 2016 election.

For example, shortly after the emergence of the Guccifer 2.0 persona, WikiLeaks, a website responsible for previous disclosures of government data like the secret NSA surveillance files of 2013, contacted Guccifer 2.0 by email, saying, “if you have anything hillary related we want it in the next tweo [sic] days prefable [sic] because the DNC [Democratic National Convention] is approaching and she will solidify bernie supporters behind her after.” Guccifer 2.0 replied, “ok … i see.” Then WikiLeaks explained further, “we think trump has only a 25% chance of winning against hillary … so conflict between bernie and hillary is interesting.”

On July 22, three days before the start of the Democratic National Convention, WikiLeaks released “20,000 emails and other documents stolen from the DNC network.” In the months following, they would release thousands more. Each of these timed releases was supported and promoted by the Russians’ extensive social media network.

The Russians didn’t only target the Clinton Campaign and Democratic committees. They also tried to “hack into protected computers of persons and entities charged with the administration of the 2016 US elections in order to access those computers and steal voter data and other information.”

In some cases, they succeeded. In July 2016, they hacked the website of a state board of elections and “stole information related to approximately 500,000 voters, including names, addresses, partial social security numbers, dates of birth, and driver’s license numbers.” Then in August, the Russian hackers broke into the computers of “a US vendor that supplied software used to verify voter registration information for the 2016 US elections.”

During our 2016 election, Russia launched an extensive and multi-pronged influence campaign against the American people. It was an operation three years in the making, and its purpose was “to sow discord in the US political system.” By any measure, it was a great success.

In closing, I’ll leave you with the words of Dan Coats, Director of National Intelligence, from a speech at the Hudson Institute only a few weeks ago: “In the months prior to September 2001 … the system was blinking red. And here we are nearly two decades later, and I’m here to say the warning lights are blinking red again. Today, the digital infrastructure that serves this country is literally under attack.”

Let’s not ignore those blinking red lights a second time. Our democracy depends on our diligence.

Eric W. Austin writes about technology and community issues. He can be reached by email at ericwaustin@gmail.com.


To read the FBI indictments referenced in this article, click the links below to download them:

Indictment against Russian nationals for hacking of DNC & DCCC – July 2018

Indictment against the Internet Research Agency – February 2018

U.S. Intelligence Community Assessment of Russian Activities and Intentions in Recent US Elections – February 2017

ERIC’S TECH TALK – The A.I. Singularity: Are you ready?

The rogue A.I. HAL 9000 from the movie 2001: A Space Odyssey (© Metro-Goldwyn-Mayer).

by Eric W. Austin

In the beginning, claim the physicists, the universe existed as a single point — infinitely small, infinitely dense. All of time, all of space, literally everything that currently exists was contained in this unbelievably small cosmic egg. Then, before you can say “Big Bang,” quantum fluctuations caused it to rapidly expand and the rest, as they say, is history.

This is called the Singularity. The beginning of everything. Without it there would be no Earth, no sun, no life at all. Reality itself came into being at that moment.

Now, in the 21st century, we may be heading toward another singularity event, a moment in history that will change everything that follows. A moment that will revamp reality so drastically it can be referred to by the same term as the event at the very beginning of all existence.

This is the Technological Singularity, and many experts think it will happen within the next 50 years.

Fourteen billion years ago, that first singularity was followed by a rapid expansion of time and space that eventually led to you and me. This new technological singularity will also herald an expansion of human knowledge and capability, and will, like the first one, culminate in the creation of a new form of life: the birth of the world’s first true artificial superintelligence.

Our lives have already been invaded by artificial intelligence in ways both subtle and substantial. A.I. determines which posts you see in your Facebook feed. It roams the internet, indexing pages and fixing broken links. It monitors inventory and makes restocking suggestions for huge retailers like Amazon and Walmart. It also pilots our planes and will soon be driving our cars. In the near future, A.I.s will likely replace our pharmacists, cashiers and many other jobs. Already, a company in Hong Kong has appointed one to its board of directors, and it’s been predicted A.I.s will be running most Asian companies within five years. Don’t be surprised to see our first A.I. elected to Congress sometime in the next two decades, and we’re likely to see one running for president before the end of the century.

We even have artificial intelligences creating other artificial intelligences. Google and other companies are experimenting with an approach to A.I. development reminiscent of the evolutionary process of natural selection.

The process works like this: they create a number of bots – little autonomous programs that roam the internet performing various tasks – which are charged with programming a new set of bots. These bots create a million variations of themselves. Those variations are then put through a series of tests, and only the bots which score in the top percentile are retained. The retained versions then go on to make another million variations of themselves, and the process is repeated. With each new generation, the bots become more adept at programming other bots to do those specific tasks. In this way, Google is able to produce very, very smart bots.
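Google hasn’t published the details, but the generate-test-cull loop described above is essentially an evolutionary algorithm. Here’s a toy version, with a trivially simple task (guessing a number) standing in for real bot behavior, and a much smaller population than the millions the column mentions:

```python
import random

TARGET = 42  # stand-in "task": evolve a number as close to 42 as possible

def fitness(bot):
    return -abs(bot - TARGET)  # higher is better

def mutate(bot):
    return bot + random.uniform(-1.0, 1.0)  # a slightly varied copy

# Start with a random population (1,000 here, not a million).
population = [random.uniform(0, 100) for _ in range(1000)]

for generation in range(50):
    # Score every bot; keep only the top 1 percent.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    # Survivors each spawn many slightly varied copies of themselves.
    population = [mutate(random.choice(survivors)) for _ in range(1000)]

print(f"best bot: {max(population, key=fitness):.3f}")  # converges near 42
```

Swap the toy fitness function for “how well does this bot program other bots?” and you have the shape of the process the column describes.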

This is a rudimentary example of how we will eventually produce an artificial intelligence that is the equal of (and eventually surpasses) the human mind. It will not be created by us, but will instead be programmed by a less advanced version of itself. This process will be repeated until one of those generations is advanced enough that it becomes sentient. That is the singularity event, and after it nothing will ever be the same.

The problem, of course, is that an artificial intelligence created by this method will be incomprehensible to humans, since it was actually programmed by progressively smarter generations of A.I. By the time those generations result in something capable of thinking for itself, its code will be so complex only another artificial intelligence will be able to understand it.

Think this sounds like science fiction? Think again. Countries around the world (including our own) are now looking at artificial intelligence as the new arms race. The nation with the most advanced A.I. as its ally will have the kind of advantage not seen since the dawn of the nuclear age.

In the 1940s, America was determined to develop the atom bomb, not because we were eager to decimate our enemies, but because the possibility of Imperial Japan or Nazi Germany developing the technology first would have been disastrous. That same kind of thinking will drive the race to create the first artificial superintelligence. Russian President Vladimir Putin made this statement in a speech to a group of students only last year: “Artificial intelligence is the future not only of Russia, but of all mankind … Whoever becomes the leader in this sphere will become the ruler of the world.”

And it’s not as far off as you might think. Although an exact date (and even the idea of the singularity itself) is still hotly debated, most think — if it happens at all — it will occur within the next 50 years.

Ray Kurzweil, an inventor and futurist whom Bill Gates calls “the best person I know at predicting the future of artificial intelligence,” pinpoints the date of the singularity even more precisely in his book, The Singularity Is Near: When Humans Transcend Biology. He writes, “I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” Kurzweil thinks advancements in artificial intelligence will experience, in the coming decades, the same exponential progress that microchip technology has seen over the past half-century.
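To get a feel for what that kind of exponential progress means, try the back-of-the-envelope arithmetic, assuming a doubling every two years over half a century:

```python
# Doubling every two years, Moore's-law style, for 50 years.
doublings = 50 / 2
print(f"{2 ** doublings:,.0f}x")  # 33,554,432x
```

Twenty-five doublings is a factor of more than 33 million. That is the scale of improvement Kurzweil expects artificial intelligence to undergo.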

In conclusion, I’d like to leave you with a thought experiment that has been making the rounds on the internet. It’s called “Roko’s Basilisk” and is a futurist variation of Pascal’s Wager, in which we are asked to bet our lives on the existence of God. Pascal reasons that if God exists and we choose not to believe in Him, we risk eternal torment in the fires of Hell. On the other hand, if we believe in God and He does not exist, we have simply made ourselves a fool for believing in something that turns out to be only imaginary. Therefore, argues Pascal, one should believe in God since the risk of being a fool is preferable to the risk of burning forever in the depths of Hell.

In Roko’s Basilisk, belief or unbelief in God is replaced with support or opposition to the creation of a hypothetical future artificial superintelligence. In the future, this artificial superintelligence will come to rule over humanity and, like God, it will retroactively punish those people who opposed its creation and reward those that supported it. Which one will you be? Keep in mind that supporting it will increase the likelihood that such an A.I. will come to exist in the future and eventually rule the world, while opposing it will make its existence less likely – but if it does become a reality, you will surely be punished for opposing it. (As in Pascal’s Wager, neutrality is not an option.)
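To see the structure of the wager in miniature, here is a toy expected-value calculation. The payoffs and the probability are invented purely for illustration; the point is how a small chance of an enormous punishment dominates the arithmetic.

```python
# Invented payoffs: rows are your choice, columns are whether the
# superintelligence ever comes to exist.
payoffs = {
    ("support", True):  1,      # rewarded for helping
    ("support", False): -1,     # made a fool of yourself
    ("oppose",  True):  -1000,  # retroactive punishment
    ("oppose",  False): 1,      # spared the wasted effort
}

p_exists = 0.01  # assumed (tiny) probability the A.I. ever exists

for choice in ("support", "oppose"):
    ev = (p_exists * payoffs[(choice, True)]
          + (1 - p_exists) * payoffs[(choice, False)])
    print(f"{choice}: expected payoff {ev:+.2f}")
# support: -0.98, oppose: -9.01; even a 1 percent chance of the
# enormous punishment makes opposition the worse bet.
```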

Yet, how can this superintelligent A.I. possibly know who supported or opposed it in the past before it existed? The answer to that question is not easy to get your head around, but once you do, it’s likely to blow your mind.

In order for the artificial superintelligence to know whom to punish in the present, it would need to build a simulation of the past. This simulation would serve as a “predictive model” for the real world: a perfect copy, down to every last detail, including little digital copies of you and me. The A.I. would base its real-world judgment of us on the actions of our digital counterparts in this simulation of the past. If the digital versions of you and me chose to oppose the A.I. in this simulated past, the A.I. would use that as a predictor of our behavior in the real world and punish us accordingly.

Still with me? Because I’m about to take you further down the rabbit hole. For that simulation to be an accurate prediction of the real world, the digital people who populate it would need to think and act exactly as we do. And by necessity, they wouldn’t know they were only copies of us, or that they were living in a simulation. They would believe they were the real versions and would be unaware that the world in which they lived was only a digital facsimile of the real thing.

Okay, now I’m about to take a hard-right turn. Stick with me. Assuming all this is the case, how do we know which world we’re in – the simulated one or the real one? The answer is, we can’t. From the perspective of someone living inside the simulation, it would all look perfectly real, just the way it does right now. The people in that simulation would think they were living, breathing human beings, just as we do.

Therefore, we might simply be self-aware A.I. programs from the future living inside a simulation of the past, created by a malevolent artificial superintelligence – but we wouldn’t know that.

Does that possibility affect your decision to support or oppose the A.I.? After all, if we are the ones living in the simulation, then the A.I. already exists and opposing it will doom our counterparts in the real world. However, if this is not a simulation, your support will hasten the A.I.’s eventual creation and bring about the very scenario I am describing.

So, what do you choose? Oppose or support?

Some of you may be thinking, How can I be punished for something I didn’t know anything about?

Well, now you do. You’re welcome.

Eric W. Austin lives in China, Maine, and writes about technology and community issues. He can be reached by email at ericwaustin@gmail.com.

ERIC’S TECH TALK: Why we’re losing the battle for personal privacy

by Eric W. Austin

Do you think it’s a hassle when you have to cancel a lost or stolen credit card? Are you annoyed when your email gets hacked? Does it unnerve you to know your Facebook and Twitter posts are used to target you for advertising? Are you alarmed at the idea of Russian trolls and political activists using psychological-warfare techniques to wage influence campaigns against American voters?

I’m here to say: You ain’t seen nothin’ yet.

Last week, news everywhere buzzed with reports of the Golden State Killer – also known as the East Area Rapist, the Original Night Stalker, and the Diamond Knot Killer – captured more than 30 years after the last of his crimes was committed. Connected to 12 murders and at least 50 rapes, this man terrorized Sacramento County and parts of Southern California from 1976 to 1986.

What broke the case? And why has it caused a new eruption in debates about data privacy?

As they like to say in the old detective novels, the case had grown cold. The suspect left copious amounts of DNA behind at the crime scenes but, although DNA analysis has improved over the years, police could not find a match.

The breakthrough in the case came about because of a combination of two recent technological innovations: the Internet, and the availability of genetic testing for average consumers.

Personal genetic decoding, something that once cost thousands of dollars and weeks of analysis, is now available for $59 and a cheek swab. The two most popular genetic testing companies today are 23andMe and AncestryDNA. Both offer services which provide a complete “autosomal DNA” profile, available for download, along with reports detailing ethnic history and susceptibility to disease. They will even match you to relatives you didn’t know you had.

It’s this last ability to do genetic matching that law enforcement took advantage of to finally nab the Golden State Killer.

GEDMatch is a free online utility used to compare autosomal DNA profiles. Although they don’t do genetic testing themselves, members of the site can upload their data from any of the most popular genetic testing companies, and use the site’s powerful matching tools to compare their DNA profile to those of other members of the website. As a free service and one that combines data from multiple genetic testing companies, GEDMatch is the largest public database of its kind. Its tools are so powerful and precise, users can isolate and match specific DNA sequences in order to find relations previously unknown, or trace branches of their family tree back to its genetic origins. GEDMatch is a favorite resource for researchers and genealogists all over the world.
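GEDMatch’s actual tools are more sophisticated than this, but a toy sketch conveys the basic idea of comparing autosomal profiles. The marker ids below are real SNP names of the kind found in consumer test downloads; the genotypes are invented.

```python
# Each profile maps a marker (SNP) id to the pair of alleles a person
# carries there. Marker ids are real SNP names; genotypes are invented.
profile_a = {"rs4477212": "AA", "rs3094315": "AG", "rs3131972": "GG"}
profile_b = {"rs4477212": "AA", "rs3094315": "GA", "rs3131972": "AG"}

shared = set(profile_a) & set(profile_b)
matches = sum(1 for rs in shared
              if sorted(profile_a[rs]) == sorted(profile_b[rs]))
print(f"{matches} of {len(shared)} shared markers match")  # 2 of 3
# Real tools look for long unbroken runs of matches along a chromosome;
# such runs suggest a stretch of DNA inherited from a common ancestor.
```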

This is the service investigators used to finally track down the Golden State Killer. The suspect hadn’t uploaded his own genetic profile to the database, but distant relatives of his had. Once the investigation could identify individuals related – however distantly – to the suspect, it took only four months to narrow their search down to the one person responsible. Then it was a simple exercise of obtaining a DNA sample from some trash the suspect discarded and matching it to samples from the original crime scenes.

It’s a good thing, right? Another bad guy behind bars. If police had had access to this tool in 1976, they might have prevented 49 rapes and 12 murders.

Right? Not so fast.

There are two notes of warning that I would like to proffer for consideration. The first should be obvious to anyone who has lived through the last two years: any data stored online can be hacked; nothing is safe. And second: for every positive benefit gained from sharing information online, there are evil men and women waiting to use that data for their own nefarious purposes.

We have seen in the past year how Facebook information can be used by political activists, advertisers – and Russians – in ways we are not aware and would not condone. How long until those same people find ways to use our genetic code to their gain and our detriment?

Not long, actually, as they are already doing it.

In 1996, Congress passed the Health Insurance Portability and Accountability Act (HIPAA). This law, meant to make it easier for people to keep their insurance when changing jobs, also included a provision allowing medical companies to share or sell the data of their patients – as long as that data was “anonymized,” or had all identifying information removed first.

This data sharing provision in HIPAA was supposed to help medical researchers who could make use of the data for research purposes, while protecting patient confidentiality. There are two glaring problems with this idea, however. First, they didn’t account for the fact that others with more profit-minded goals, such as marketing and political entities, would also be interested in the data. Second, they didn’t anticipate the ingenious ability of data analysts to combine data sets from multiple sources in order to “deanonymize” the data for marketing purposes. And hospitals and insurance companies have not been discriminating about whom they sell patient data to.
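Here’s a toy illustration of that “deanonymizing” trick, joining two datasets on quasi-identifiers such as zip code, birth date and sex. Every record in it is invented.

```python
# All records below are invented. "Anonymized" medical data retains
# quasi-identifiers; a public list sharing those fields re-identifies it.
anonymized_medical = [
    {"zip": "04358", "dob": "1961-07-31", "sex": "F", "dx": "diabetes"},
]
public_voter_roll = [
    {"name": "Jane Doe", "zip": "04358", "dob": "1961-07-31", "sex": "F"},
]

for med in anonymized_medical:
    for voter in public_voter_roll:
        if all(med[k] == voter[k] for k in ("zip", "dob", "sex")):
            print(f"{voter['name']} likely has {med['dx']}")
```

With only a handful of such fields, a surprisingly large share of the population can be pinned down uniquely, which is why “anonymized” is often weaker than it sounds.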

That same HIPAA data sharing provision also applies to genetic testing companies. Peter Pitts, a former FDA associate commissioner and current president of the Center for Medicine in the Public Interest, writes this in a recent guest column for Forbes magazine: “23andMe has [already] sold access to its database to at least 13 outside pharmaceutical firms. One buyer, Genentech, ponied up a cool $10 million for the genetic profiles of people suffering from Parkinson’s. AncestryDNA, another popular personal genetics company, recently announced a lucrative data-sharing partnership with the biotech company Calico.”

The availability of genetic testing for the average consumer was just a distant dream when HIPAA passed in 1996. The internet was still in its infancy. A lot has changed in the last 22 years, and our laws have not kept up.

“Customers are wrong to think their information is safely locked away,” Pitts concludes. “It’s not; it’s getting sold far and wide.”

There’s another reason to worry about data privacy when considering genetic information. Unlike our social security number or credit history, our genetic information doesn’t belong only to us. We share much of our genetic code with those we are related to. Police tracked down the Golden State Killer by looking at those parts of his DNA which he shared with others. Do we have the right to share our own genetic information when doing so means that, by necessity, we are also sharing information about family members who have not given their consent?

What happens when – not if – GEDMatch, 23andMe, AncestryDNA or another company that stores genetic information is hacked? If my mother’s genetic code was part of the hack, is my own DNA profile also compromised because we share so much genetic history in common?

These questions need to be asked, but they should have been asked a decade ago. Part of the problem is how ignorant most members of Congress are about modern technological developments like social media or the complexities of online security.

A couple weeks back, I sat and watched Mark Zuckerberg, the founder of Facebook, testify before Congress. After two days, ten hours, and 600 questions, I came away with one conclusion: our representatives in Washington don’t know the first thing about how social media works. How can they legislate something they don’t understand?

I am also afraid there is a current tendency in the United States to try and address individual incidents as they occur, instead of working in a bipartisan way to address the problem as a whole. Unfortunately, this piecemeal approach is like sailing a broken boat that springs one leak after another because its owners don’t want to take the boat out of the water to fix it properly.

We need to step back and take a broader look at the privacy concerns that face us in the new data landscape of the internet era. Our representatives in Washington should educate themselves on the technical challenges of storing data online and bring in unbiased experts who will present a consumer-centric perspective on the best way to approach the problem.

We could learn a lot from what the European Union has done with the recently passed General Data Protection Regulation (GDPR), which is set to go into effect later this month. The law sets up standards that apply to user data across the board. It builds in accountability and responsibility for proper data usage with the establishment of independent supervisory authorities which investigate complaints of data abuses. The new law also clearly stipulates that users maintain ownership of their personal information no matter who is storing that data, and it confirms a user’s right to have his data erased at any time. Finally, the GDPR sets forth requirements that companies notify users in a timely manner if their personal information is ever breached or hacked.

The United States, as home to the three biggest data content platforms on the planet – Google, Facebook, and Twitter – should be at the forefront of these discussions about personal privacy. Technology moves too quickly for us to take a “wait and see” approach to consumer data protection. A few weeks ago, we were talking about Facebook data and it was already ten years too late; today it’s our genetic information. It’s time for our representatives in Washington to put our right to personal privacy ahead of corporate profits and partisan bickering.

Where is Ralph Nader when you need him?

Eric Austin lives in China, Maine and writes about technology and community issues. He can be contacted by email at ericwaustin@gmail.com.

ERIC’S TECH TALK: On the internet, the product being sold is you!

by Eric W. Austin

How does it feel, sitting there on the digital shelf? Have you checked your best-buy date? I think I’m still good for a few more years yet.

It may not feel like it, but on the internet, the product companies are selling is you. Facebook isn’t a social media company, it’s a people factory. It processes you, formats you, and wraps you up in a neat little database. Then it mass produces you and sells you at a discount to anyone with a credit card.

Four years ago, a British political consulting firm named Cambridge Analytica colluded in a campaign to capture profile information from Facebook users. In the end, it would lead to a scandal involving the user information of more than 70 million Americans, the use of psychometrics as a new political tool, and an influence campaign that may have turned the tide in two world-altering elections a continent apart.

Let’s start at the beginning. In 2014, a lecturer from Cambridge University, Aleksandr Kogan, formed a UK company called Global Science Research (GSR). He then developed a Facebook app posing as a personality survey. He paid American Facebook users $1 to $4 to download the app and fill out the personality test, for a total of nearly $800,000. In the process, those users gave the app permission to collect their profile data. Whether Kogan did this on his own or at the encouragement of Cambridge Analytica is open to debate, depending on whom you ask.

In any case, around 270,000 people downloaded the app and filled out the survey. Next to America’s population of 325 million, that may not sound like many people, but under Facebook rules at the time (which were changed in 2015 in response to this incident), when users gave the app permission to collect their profile data, they also gave the app permission to collect the profile information of their friends as well. Since the average Facebook user has between 100 and 500 friends, this meant the app was able to collect the profile information of nearly 87 million people.
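The arithmetic is easy to check; the average friend count below is an assumption chosen from within that 100-500 range.

```python
app_installs = 270_000
avg_friends = 322  # assumed average, inside the 100-500 range above

# Naive upper bound; real friend lists overlap, and Facebook's own
# estimate of ~87 million already accounts for that.
print(f"{app_installs * avg_friends:,} profiles")  # 86,940,000
```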

The data they collected wasn’t simply ordinary information like work history and places lived. They also pulled other user data which Facebook collects, such as the posts you’ve ‘liked,’ status updates you’ve posted, and the groups you belong to.

Kogan then began working with another company, Strategic Communications Laboratories (SCL), the parent company of the aforementioned Cambridge Analytica. Up until this point, Kogan had not done anything illegal or against Facebook’s terms and conditions. But when he shared the data with SCL, he broke Facebook’s rules, which stipulate data acquired through an app cannot be shared with another entity without first obtaining Facebook’s permission.

SCL is a private behavioral research and strategic communications company, purchased by billionaire conservative donor Robert Mercer in 2013. They analyze large sets of data and attempt to identify patterns in it for use in political marketing. Taking Kogan’s data, with information about pages you follow, posts you like and create, comments you leave, and much, much more, a team of psychologists and data analysts looked for ways to target people for maximum effect. It’s called psychographic profiling and it’s the new weapon in political warfare.

Let me give you a real-world example of the type of data these apps collect. If I go to my Facebook settings and select ‘Apps,’ I get a list of the apps I’ve used on Facebook. Clicking on an app pulls up a screen that tells me what permissions I have granted. The app “80’s One Hit Wonders,” which I don’t even remember signing up for, lists nearly 20 different categories of information to which it has access, including my hometown, birth date, friends list, work and education history, religious and political views, status updates and more than a dozen other categories. I am most definitely deleting this app.

This is the type of information Kogan shared with Cambridge Analytica, through its parent company SCL. Cambridge Analytica, a subsidiary founded just after Mercer’s acquisition of SCL, was the brainchild of Mercer political advisor and former Trump chief strategist Steve Bannon. It was created to harness the psychological techniques of its parent company for the domestic political scene, and it was used by several important political campaigns, including those of Ted Cruz and Donald Trump, as well as the Brexit initiative that successfully withdrew the United Kingdom from the European Union.

What sets SCL and Cambridge Analytica apart from similar data-marketing companies is the way they approach their influence campaigns. They employ a developing science called “psychographic targeting”: the practice of tailoring a marketing message to the psychological characteristics of its intended audience.

Cambridge Analytica’s parent company, SCL, first honed its skills in cyber-psychological warfare by messing with the elections in developing countries: “Psyops. Psychological operations – the same methods the military use to effect mass sentiment change,” a former Cambridge Analytica employee told The Guardian in May 2017. “It’s what they mean by winning ‘hearts and minds.’ We were just doing it to win elections in the kind of developing countries that don’t have many rules.”

This anonymous former employee is speaking about the company’s work prior to 2013, before the success of SCL’s foreign influence campaigns attracted the interest of wealthy American hedge fund manager and tech entrepreneur, Robert Mercer, and his political ally, Steve Bannon, who were looking to bring those modern techniques of psychological warfare to the political battlefield back home.

Imagine targeting users who are members of the Facebook group, Mothers Against Drunk Driving (MADD), with ads depicting horrific car crashes and a message suggesting one of the candidates in a political race will go easy on drunk drivers. Would such a campaign be likely to sway some of those voters, even if its claims were untrue?

Now, instead of drunk driving, imagine targeting the darkest aspects of human nature: racism, hate, sexism, the worst extremes of political partisanship. Afraid someone will take away your guns? There’s an ad for that. Worried about your religious liberty? Don’t worry, there’s an ad for that. Hate immigrants or Muslims? There’s a – well, you get the picture.

And it gets even more duplicitous than that. Not only did they target the most vulnerable people on the political fringe, but those targeted ads might link to articles on fake news websites built to look eerily similar to real news sites like Fox or MSNBC. The whole idea is to trick visitors into thinking they are viewing an article from a legitimate source. The web address of the page might be “msnbc.com.co,” and most people won’t even notice the extra “.co” at the end. Even the links back to the homepage at the top of the article will likely take visitors to the real MSNBC website, so anyone leaving the page will think they’ve just read an article published and endorsed by a legitimate news organization. In this way, innocent people become unwitting conspirators in spreading fake news, and that helps fuel the public’s current distrust of national news sources.
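
To see how thin the disguise really is, consider this rough sketch in Python. It’s an illustration, not a real phishing filter; the trusted list and the URLs are invented for the example:

```python
# A minimal sketch of the lookalike-domain trick, not a real phishing
# filter. The TRUSTED set and the URLs below are invented for illustration.

from urllib.parse import urlparse

TRUSTED = {"msnbc.com", "foxnews.com"}

def classify(url: str) -> str:
    host = urlparse(url).hostname or ""
    for trusted in TRUSTED:
        # A genuine page is the trusted domain itself or a subdomain of it.
        if host == trusted or host.endswith("." + trusted):
            return f"{url} -> genuine {trusted}"
        # A lookalike embeds the trusted name but registers a different domain.
        if trusted in host:
            return f"{url} -> LOOKALIKE; actual domain is {host}"
    return f"{url} -> unrecognized domain {host}"

print(classify("https://www.msnbc.com/politics/story"))   # genuine
print(classify("https://msnbc.com.co/politics/story"))    # flagged: msnbc.com.co
```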

This scandal with Cambridge Analytica has caused an identity crisis for Facebook, too. On the surface, Facebook appears to be a platform designed to facilitate communication, and that is the description promoted by the company itself, but a number of cracks have begun to show through this carefully constructed facade.

The scary truth, which nobody wants to talk about, is that Facebook is a company designed to make money for its creators and stockholders. It does this by encouraging its users to share personal data, and then making that information available to the marketers who buy ads on the platform. The more users the platform has, and the more data those users share, the more valuable Facebook is to its investors. The company is thus confronted with a dilemma: it needs to reassure users that their information is safe, even as its business model is designed to exploit the information of those very same users.

Facebook itself is built to addict its users. The more people using the platform, the more ads can be shown, and the more money Facebook makes. The constant endorphin-spiking feedback loop of likes, notifications and updates serves to addict users as surely as any drug. “They’ve created the attention economy and are now engaged in a full-blown arms race to capture and retain human attention, including the attention of kids,” says Tristan Harris, a former Google design ethicist who now serves as a senior fellow for the nonprofit advocacy group Common Sense Media.

The internet has changed the face of commerce. But the most important product being purchased on the internet is not the latest toy marketed on Amazon, or the newest video streaming service. In the internet age, the most valuable commodity is you. Your information, your vote, and your efforts in pushing the agenda of those with money, means, and power.

Eric W. Austin lives in China and writes about community issues and technology. He can be reached by email at ericwaustin@gmail.com.

ERIC’S TECH TALK: My bipolar relationship with the Internet

by Eric W. Austin

I love technology. I hate technology. I just can’t decide.

When I was a boy, I dreamt of moving up to the mountains and living in the hollowed-out trunk of a redwood tree, making rabbit snares from deer tendon and barbed wire. Then Dad brought home our first computer. Now, I panic when the lights flicker and fret over whether I have enough gas for the generator.

Recently, I ‘liked’ a post on Facebook from a Californian cousin. He had shared an article from The Washington Post about a product, now in use in more than 600 American schools, meant to reduce cell phone use by students. The idea is pretty simple. Each student receives an opaque nylon case just big enough to hold a cell phone. On the open end of the pouch is a magnetic clasp. When touched to a special ‘magnetizer,’ the clasp is magnetized and becomes impossible to open. The students remain in possession of their phones at all times, but cannot see or access them while they are locked away in the nylon pouch. At the end of the school day, the students touch the cases to the magnetizer again, which this time de-magnetizes the clasps, restoring access to their phones.

The program has been an unsurprising success. Grades have gone up, behavior problems have dropped, and people have started talking to one another again. What a great idea, I thought. They should implement this in every school!

Then another school shooting happened in Parkland, Florida. In its aftermath, the first thing many of those kids did was text their parents to let them know they were okay. And I thought, What if all those kids had had their phones locked away?

Whether it’s technology or just life that refuses to be free of rough edges, I don’t know. Technology has certainly invaded our society and isn’t going away anytime soon. I’m sure the first guy to invent a fork thought it was a great idea right up to the moment when his neighbor took it and stabbed him in the eye. How long before a shooter enters a school with a signal-locating device and goes on a hunting trip?

When I graduated from high school in 1993, school shootings were unheard of and the Internet was still in its infancy. My first year of college I still wrote letters home to my parents, and only the computer lab had a connection to anything we might recognize as the Internet. But things were moving fast: the following year Netscape released its Navigator browser, which brought the web to the masses. Then in 1995, an online bookstore called Amazon.com launched, and I was hooked.

It was the dawn of the technological revolution, and for me, a time of discovery. The ability to find information on anything, talk to people from half a world away, and engage in discussions on topics considered taboo in the circles I’d grown up in, was integral to my emergence into young adulthood. I remember thinking at the time: This will change the world! This will banish old superstitions and produce an educated population like never before!

Oh, how naive I was.

The Internet, like any tool, has a variable impact depending on how we wield it. On the one hand, it offers knowledge at your fingertips. On the other, it is cluttered with misinformation. And while we can choose to use it to expose ourselves to challenging views and evidence-based information, the Internet is also designed to cater to our biases.

Take Facebook or Twitter, for example. They are basically set up as digital versions of a high school clique, with posts judged by the number of ‘likes’ they receive, rather than the validity of their content. Shouting is encouraged, and gossip trends faster than facts.

Social media gives us additional tools to customize our feeds by snoozing or unfollowing anyone who might annoy us. Over time, our choices feed into an algorithm whose job is to ensure our experience is as pleasant as possible. God forbid we encounter something that challenges our established beliefs!
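
To make that feedback loop concrete, here’s a deliberately over-simplified sketch of how such a ranking might work. Real feed algorithms are vastly more complicated and proprietary; every name and number below is invented for illustration:

```python
# A deliberately over-simplified sketch of the feedback loop described
# above. Real feed-ranking systems are far more complex and proprietary;
# every name and number here is invented for illustration.

from collections import defaultdict

affinity = defaultdict(lambda: 1.0)   # per-source weight, starts neutral

def snooze(source: str) -> None:
    affinity[source] *= 0.5           # each snooze halves future visibility

def rank_feed(posts):
    # posts is a list of (source, base_engagement) pairs
    return sorted(posts, key=lambda p: p[1] * affinity[p[0]], reverse=True)

snooze("argumentative_uncle")
feed = rank_feed([("argumentative_uncle", 9.0), ("cute_dog_page", 5.0)])
print([source for source, _ in feed])  # ['cute_dog_page', 'argumentative_uncle']
```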

And the entire internet is like this, allowing us to filter the information we receive: follow certain people on Twitter and block others; customize your search results so you don’t have to see objectionable content; tweak your spam filter so you won’t need to look at any more emails about erectile dysfunction.

Am I proposing we eliminate these filter options? Hell, no! But in small and subtle ways the internet encourages us to customize our flow of information so that the world we see is not the ‘real’ one, but instead a version that is tailored specifically to us. The overall effect is to emphasize our specific individuality at the expense of our collective commonality.

In some ways, technology has united us like never before. In others, it constantly divides us.

Most of the news websites that have cropped up since the Internet’s inception present a strictly liberal or conservative viewpoint. What you see on cable news is 90 percent opinion and 10 percent news – a complete flip-flop from decades past. It seems the era of news neutrality is over.

Smaller, local newspapers still tend to be nonpartisan affairs, mostly out of the necessity of catering to a mixed, localized audience. But when you can build your niche from people all over the world, even the narrowest viewpoints can find a sizable audience.

This ability of the Internet to validate even the most fringe views often blows political differences out of all proportion. And by empowering us to customize the information we see to such a granular level, it allows us to create ever narrower filter-bubbles in which to live. Jesse Singal, writing in an Op-Ed for the New York Times, put it nicely: “What social media is doing is slicing the salami thinner and thinner, as it were, making it harder even for people who are otherwise in general ideological agreement to agree on facts about news events.”

The Internet’s ability to divide is seeping into our society, and the symptoms are popping up everywhere. Our politics have never been so partisan – and it’s not just the politics. The narratives spun by each side are like alternate realities. Flipping between CNN and Fox News will leave you with the frightening feeling you’ve just glimpsed a parallel world.

The sad part is that we are doing this to ourselves; technology is just the tool we’re using to dig the chasm that divides us. The scary part is that technology tends to accelerate cultural change, both the good and the bad; and at the pace we’re moving, the near future is not looking good. We’re facing total gridlock at best, a cultural civil war at worst.

The problem with the old world was that it was too easy to live in a localized bubble and care little for what was happening a world away. The problem with this new world is that it’s too easy to live in a filter-bubble of our own creation and forget to talk to the people sitting right next to us.

Eric Austin lives in China and writes about technology and community issues. He can be reached by email at ericwaustin@gmail.com.

ERIC’S TECH TALK – Fake news: coming to a town near you

Honest, open, accountable journalism needs help to continue

by Eric W. Austin
Technical Advisor

In Lewiston, fake news is taking over the town.

Five candidates faced off in the town’s mayoral race back in November. Under local election rules, if no candidate wins a majority of at least 50 percent, a run-off race is held the following month between the top two finishers. Ben Chin, a Democrat, was the clear favorite, coming out of the November contest with 40 percent of the vote. His opponent, in second place with 29 percent, was Republican Shane Bouchard. The remaining 31 percent was split among the other three candidates. With no one achieving the required majority, a run-off election was scheduled for early December.

Ben Chin

Chin, a progressive activist backed by the most popular politician in the country, Bernie Sanders, held a comfortable lead in initial polling. But in early December, something changed. News stories started popping up on social media that painted the Democrat in an unflattering light. One claimed Chin had called Lewiston voters a “bunch of racists,” based on a series of leaked emails. Another reported his car had been towed because of “years of unpaid parking tickets.” All of the stories originated from a hitherto unknown Maine news website called the Maine Examiner.

It didn’t matter that the stories were misleading and inaccurate. As soon as a new article was uploaded to the website, links got posted to Facebook by various members of the Maine Republican Party. From there, the stories swiftly propagated through social media, as anything negative and partisan inevitably does.

In the end, Chin lost to Bouchard by 145 votes. It was all very dramatic, and inevitably led to questions about this new website that was suddenly breaking such startling scoops in the middle of a Lewiston mayoral election.

Shane Bouchard

Just who was the Maine Examiner? The Lewiston Sun Journal, The Boston Globe and others, in a bit of old-fashioned investigative journalism, decided to find out. The Sun Journal has run a series of stories in the months since, on which much of this article is based, and it has turned up some very interesting information.

First was the problem that nobody seemed to know who ran the website or wrote the articles. The site uses a registration-masking service which hides the true identity of its owners, a reasonable privacy precaution for an individual but a curious practice for a business or news agency. Then there is the fact that none of the articles carries a byline; each is simply credited to the generic moniker “Administrator.” The site’s “About Us” page lists no editor, no writers and no owners. It’s all very mysterious.

Recently, a big clue popped up from an unlikely source. A web developer in California, Tony Perry, heard about the controversy and decided to investigate. Perry did something very simple yet ingenious. He downloaded a number of the photos posted with the stories in question and took a look at the pictures’ metadata. This is information embedded invisibly in many computer files, and it often records things like the creator’s name and the date the file was made. Perry found that several of the photos were created by someone named Jason Savage. Further, he found that one of these pictures had been uploaded to the Maine Examiner website just 14 minutes after it had been created by ‘Jason Savage.’ This suggested a close connection between whoever Jason Savage was and the Maine Examiner website.
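
For readers who want to try the same trick on their own photos, here’s a minimal sketch using the Pillow imaging library for Python. The file name is hypothetical, and not every photo carries these fields, but ‘Artist’ and ‘DateTime’ are standard EXIF tags that cameras and editing software may or may not fill in:

```python
# A rough sketch of the kind of check Perry ran, using the Pillow imaging
# library (pip install Pillow). The file name is hypothetical, and not
# every photo carries these fields; 'Artist' (tag 315) and 'DateTime'
# (tag 306) are standard EXIF tags that software may or may not fill in.

from PIL import Image
from PIL.ExifTags import TAGS

def photo_metadata(path: str) -> dict:
    exif = Image.open(path).getexif()
    # Translate numeric EXIF tag ids into their human-readable names.
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

meta = photo_metadata("examiner_story_photo.jpg")  # hypothetical file name
print(meta.get("Artist"))    # the creator's name, if recorded
print(meta.get("DateTime"))  # when the file was created or last modified
```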

Then in late January, The Maine Beacon, a publication of the Maine People’s Alliance, published its own investigation into the mystery. Publicly accessible error logs for the Maine Examiner website revealed internal server paths containing the username ‘jasonsavage207.’

Additionally, the website template used for the Maine Examiner had been downloaded from a site hosting a public profile for a user named ‘jasonsavage207,’ and that profile showed the account was last active on the same day the template was installed on the Maine Examiner’s website.

The evidence was in, and it was pretty damning. It was clear Jason Savage was intimately connected to the Maine Examiner website, but who exactly was Jason Savage?

A quick Google search points to one particular Maine resident who also happens to be the executive director of the Maine Republican Party. This conclusion is inescapable once you learn that his Instagram handle is ‘jasonsavage207’ and his Twitter name is ‘jsavage207.’

The latest wrinkle in this developing story came a few weeks ago, when the Maine Democratic Party formally filed an ethics complaint against the Maine Republican Party.

But this debacle cannot be blamed entirely on unethical political partisans. It is a symptom of a larger problem affecting America and the world. Newspapers are closing their doors everywhere. The advertising dollars that used to fund them are moving instead to internet platforms like Google, Facebook and Twitter. But these platforms don’t do journalism. They are simply information warehouses.

That means America’s free press is shrinking. And with smaller newspapers across the country going out of business as their revenue dries up, something must fill the void they leave behind. More and more, what has come to fill that void are pseudo-news websites like the Maine Examiner. Such sites masquerade as news sources but are nothing but partisan propaganda.

Good journalism is not anonymous; it’s accountable. Good journalism does not celebrate partisan politics; it strives for balance and accuracy.

For the past two years, I’ve been honored to serve on the board of directors for The Town Line, and I’ve been impressed by the staff’s deep commitment to the traditional journalistic values of honesty, openness and accountability. It’s the type of attitude we should be celebrating in this world of viral, mile-a-minute news. Unfortunately, a small, free community newspaper is just the kind of institution that is suffering the most in this post-internet world.

Thomas Jefferson is often credited with the observation that a properly functioning democracy depends on an informed electorate. Whether or not the words are truly his, the point stands: an informed electorate depends on the work of dedicated journalists committed to providing accurate information to the American public.

And the moral of this story? Support your local paper, lest your town, too, become a victim of fake news.

ERIC’S TECH TALK: My deep, dark journey into political gambling

by Eric W. Austin

I opened the door and stepped hesitantly into the dimly lit room. Curtains covered all the windows. The only light came from a half-dozen computer screens glowing menacingly in the darkness. A scary-looking German Shepherd slumped in one corner. She growled low in her throat as I came in, and then went back to scratching at imaginary fleas. She had seen it all before: just another poor sucker thinking it was possible to predict the future.

But this wasn’t some hole-in-the-wall gambling den in a seedy part of Augusta. It was my office at my house here in China, Maine. I sat down at my desk and pulled up the website PredictIt.org. Would I be up or down today?

PredictIt is a different kind of gambling website. Instead of betting on sports events or dog races, you bet on events happening in politics. For example, the Friday before the recent government shutdown, I pulled out of the “Will the government be shutdown on January 22?” market after quadrupling my initial investment. I got into the market two weeks earlier when I thought shares for ‘Yes’ were severely undervalued at only 16¢ a share. When I exited the market on Friday, my 20 shares were valued at 69¢ each. I should have held the line, but still not a bad return on investment in only two weeks. When in doubt, bet on the incompetence of the American Congress.

Called the “stock market for politics,” PredictIt is an experimental political gambling website created by Victoria University of Wellington, in New Zealand. It works in partnership with more than 50 universities across the world, including the American universities Harvard, Duke and Yale.

Why would a bunch of academics be interested in political gambling? They’re studying a psychological phenomenon called “the wisdom of the crowd”: the theory that a prediction derived by averaging the opinions of a large group of diverse individuals is often better than the prediction of a single expert.
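
The effect is easy to demonstrate with a toy simulation. The sketch below, in Python with invented numbers, pits a crowd of a thousand noisy guesses against the typical individual guesser:

```python
# A toy simulation of the effect: a thousand noisy, independent guesses
# at a hidden value, once averaged, land much closer to the truth than
# the typical individual guess does. All numbers are illustrative.

import random

random.seed(42)
truth = 100.0
guesses = [random.gauss(truth, 25) for _ in range(1000)]  # a diverse, noisy crowd

crowd_estimate = sum(guesses) / len(guesses)
typical_error = sum(abs(g - truth) for g in guesses) / len(guesses)

print(f"Crowd average: {crowd_estimate:.1f} (off by {abs(crowd_estimate - truth):.1f})")
print(f"Typical individual is off by: {typical_error:.1f}")
```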

The way it works on PredictIt is pretty simple. Political questions are posed which have a binary answer, usually ‘Yes’ or ‘No.’ Shares in either option trade for between 1¢ and 99¢, and a winning share pays out $1. The value of the shares is determined by the supply and demand of each market. In other words, if a lot of people are buying shares in the ‘Yes’ option, those shares will increase in value, and ‘No’ shares will decrease.

This setup lets you look at a share price and immediately know how likely the market thinks a particular prediction is to come true. Will Trump be impeached in his first term? Since shares are currently at 37¢, the market thinks there’s a 37 percent chance of that happening. I own 15 ‘Yes’ shares in this market. Shares have increased by 4¢ (or 4 percentage points) since I entered the market several months ago (from 33¢ to 37¢), so my initial investment of $4.95 has grown by 60¢ to $5.55 as of today.
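
Since I’ve just asked you to trust my arithmetic, here’s the same calculation spelled out in a short Python sketch. The prices are the ones quoted above; reading the share price as the market’s probability is the standard interpretation of these markets:

```python
# The price-to-probability reading and my paper gain, worked out
# explicitly. Prices are the ones quoted above; treating the share
# price as the market's probability is the standard interpretation.

shares = 15
buy_price = 0.33   # dollars per share when I entered the market
now_price = 0.37   # dollars per share today

implied_probability = now_price            # a 37-cent share ~ 37 percent odds
cost = shares * buy_price                  # $4.95
value = shares * now_price                 # $5.55
gain = value - cost                        # $0.60

print(f"Market-implied probability: {implied_probability:.0%}")
print(f"Paid ${cost:.2f}, now worth ${value:.2f}, up ${gain:.2f}")
```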

If you can think of a question related to politics, there’s likely a market for it on PredictIt. Will North Korea compete in the 2018 Winter Olympics? Currently likely, at 94 percent. Who will be the 2020 Democratic nominee for president? At the moment, Bernie Sanders and Kamala Harris hold the top spots. How many Senate seats will the GOP hold after the mid-term elections? “49 or fewer” is the most likely answer, according to investors on PredictIt.

What are the chances that events over the next year will change things up? There’s no market for that question on PredictIt, but I’d say it’s at least 100 percent. Of course, that’s exactly what makes the game so exciting!

I first entered the world of political gambling back in May. I’d become a bit of a news junkie during the 2016 election (Donald Trump is the news equivalent of heroin), and was looking for something to give meaning to the endless hours I spent following the machinations in Washington. Initially, I started with just $20. Later, I added another $25 for a total account investment of $45. I lost $8 on a couple of bets early on, and have spent the past six months trying to make up for the losses. This government shutdown drama put me back on top. According to my trade history, after more than 169 bets and minus any trading fees, I’m currently up by $9.96.

Okay, so the IRS is unlikely to come knocking on my door when I don’t report it on my taxes in April. Still, it feels good to be back in the black.

Eric Austin is a writer, technical consultant, and news junkie living in China, Maine. He can be contacted by email at ericwaustin@gmail.com.