ERIC’S TECH TALK: On the internet, the product being sold is you!

by Eric W. Austin

How does it feel, sitting there on the digital shelf? Have you checked your best-buy date? I think I’m still good for a few more years yet.

It may not feel like it, but on the internet, the product companies are selling is you. Facebook isn’t a social media company, it’s a people factory. It processes you, formats you, and wraps you up in a neat little database. Then it mass produces you and sells you at a discount to anyone with a credit card.

Four years ago, a British political consulting firm named Cambridge Analytica colluded in a campaign to harvest profile information from Facebook users. In the end, it would lead to a scandal involving the user information of more than 70 million Americans, the use of psychometrics as a new political tool, and an influence campaign that may have turned the tide in two world-altering elections a continent apart.

Let’s start at the beginning. In 2014, a lecturer from Cambridge University, Aleksandr Kogan, formed a UK company called Global Science Research (GSR). He then developed a Facebook app posing as a personality survey. He paid American Facebook users $1 to $4 each to download the app and fill out the personality test, spending nearly $800,000 in total. In the process, those users gave the app permission to collect their profile data. Whether Kogan did this on his own or at the encouragement of Cambridge Analytica is open to debate, depending on whom you ask.

In any case, around 270,000 people downloaded the app and filled out the survey. Next to America’s population of 325 million, that may not sound like many people, but under Facebook rules at the time (changed in 2015 in response to this incident), when users gave the app permission to collect their profile data, they also gave it permission to collect the profile information of all their friends. Since the average Facebook user has between 100 and 500 friends, this meant the app was able to collect the profile information of nearly 87 million people.
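The arithmetic behind that jump from 270,000 installs to 87 million profiles is easy to sketch. The figures and the no-overlap assumption below are illustrative, not Facebook’s actual data:

```python
# Back-of-the-envelope sketch: how an app's reach multiplies when each
# installer also exposes their friends' profiles. Assumes no two
# installers share a friend, so this is an upper bound.

def estimated_reach(installers: int, avg_friends: int) -> int:
    """Unique profiles exposed: the installers plus all their friends."""
    return installers + installers * avg_friends

installs = 270_000
# At roughly 320 friends per installer, reach passes 86 million:
print(estimated_reach(installs, 320))  # prints 86670000
```

Even with heavy overlap between friend lists, a few hundred thousand installers can expose tens of millions of profiles.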

The data they collected wasn’t simply ordinary information like work history and places lived. The app also pulled other user data that Facebook collects, such as the posts you’ve ‘liked,’ status updates you’ve posted, and the groups you belong to.

Kogan then began working with another company, Strategic Communications Laboratories (SCL), the parent company of the aforementioned Cambridge Analytica. Up until this point, Kogan had not done anything illegal or against Facebook’s terms and conditions. But when he shared the data with SCL, he broke Facebook’s rules, which stipulate data acquired through an app cannot be shared with another entity without first obtaining Facebook’s permission.

SCL is a private behavioral research and strategic communications company, purchased by billionaire conservative donor Robert Mercer in 2013. It analyzes large sets of data, attempting to identify patterns that can be exploited in political marketing. Taking Kogan’s data, with information about pages you follow, posts you like and create, comments you leave, and much, much more, a team of psychologists and data analysts looked for ways to target people for maximum effect. It’s called psychographic profiling, and it’s the new weapon in political warfare.

Let me give you a real-world example of the type of data these apps collect. If I go to my Facebook settings and select ‘Apps,’ I get a list of the apps that I’ve used on Facebook. Clicking on an app pulls up a screen that tells me what permissions I have granted. In the app “80’s One Hit Wonders,” which I don’t even remember signing up for, it lists nearly 20 different categories of information to which the app has access. This includes my hometown, birthdate, friends list, work and education history, religious and political views, status updates and more than a dozen other categories. I am most definitely deleting this app.

This is the type of information Kogan shared with Cambridge Analytica, through their parent company SCL. Cambridge Analytica, a subsidiary of SCL founded just after Mercer’s acquisition of the company, was the brainchild of Mercer political advisor and former Trump Chief Strategist, Steve Bannon. The creation of Cambridge Analytica was an attempt to harness the psychological techniques of its parent company for the domestic political scene, and was used by several important political campaigns, including those of Ted Cruz and Donald Trump, as well as the Brexit initiative which successfully withdrew the United Kingdom from the European Union.

What sets SCL and Cambridge Analytica apart from other similar data-marketing companies is the way they approach their influence campaigns. They employ a developing science called “psychographic targeting.” This is the process of tweaking your market-targeting based on the psychological characteristics of your intended audience.

Cambridge Analytica’s parent company, SCL, first honed its skills in cyber-psychological warfare by messing with the elections in developing countries: “Psyops. Psychological operations – the same methods the military use to effect mass sentiment change,” a former Cambridge Analytica employee told The Guardian in May 2017. “It’s what they mean by winning ‘hearts and minds.’ We were just doing it to win elections in the kind of developing countries that don’t have many rules.”

This anonymous former employee is speaking about the company’s work prior to 2013, before the success of SCL’s foreign influence campaigns attracted the interest of wealthy American hedge fund manager and tech entrepreneur, Robert Mercer, and his political ally, Steve Bannon, who were looking to bring those modern techniques of psychological warfare to the political battlefield back home.

Imagine targeting users who are members of the Facebook group, Mothers Against Drunk Driving (MADD), with ads depicting horrific car crashes and a message suggesting one of the candidates in a political race will go easy on drunk drivers. Would such a campaign be likely to sway some of those voters, even if its claims were untrue?

Now, in lieu of drunk driving, imagine instead targeting the darkest aspects of human nature: racism, hate, sexism, the worst extremes of political partisanship. Afraid someone will take away your guns? There’s an ad for that. Worried about your religious liberty? Don’t worry, there’s an ad for that. Hate immigrants or Muslims? There’s a – well, you get the picture.

And it gets even more deeply duplicitous than that. Not only did they target the most vulnerable people on the political fringe, but those targeted ads might link to articles on fake news websites which look eerily similar to real news sites like Fox or MSNBC. The whole idea is to trick visitors into thinking they are viewing an article from a legitimate source. The web address might be the real domain with an extra “.co” tacked onto the end, but most people won’t even notice the difference. Even the links back to the homepage at the top of the article will likely take visitors back to the real MSNBC website, so that anyone leaving the page will think they’ve just read an article published and endorsed by a legitimate news organization. In this way, innocent people become unwitting conspirators in spreading fake news, and it helps fuel the public’s current distrust of national news sources.
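A crude version of the lookalike-domain check a wary reader (or a browser extension) might apply can be sketched in a few lines. The domain list and function name here are hypothetical, purely for illustration:

```python
# Hedged sketch: flag hosts that merely extend a well-known news domain
# with extra labels (e.g. an appended ".co"). KNOWN is an illustrative list.
from urllib.parse import urlparse

KNOWN = {"msnbc.com", "foxnews.com"}

def looks_spoofed(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    if host.startswith("www."):
        host = host[4:]
    if host in KNOWN:
        return False  # the genuine domain
    # "msnbc.com.co" starts with "msnbc.com." and gets flagged;
    # a real subdomain like "news.msnbc.com" does not.
    return any(host.startswith(k + ".") for k in KNOWN)

print(looks_spoofed("http://msnbc.com.co/politics/story"))   # True
print(looks_spoofed("https://www.msnbc.com/politics/story")) # False
```

Real anti-phishing tools use far more sophisticated heuristics, but the principle is the same: the spoof rides on a trusted name while resolving to a different registered domain.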

This scandal with Cambridge Analytica has caused an identity crisis for Facebook, too. On the surface, Facebook appears to be a platform designed to facilitate communication, and that is the description promoted by the company itself, but a number of cracks have begun to show through this carefully constructed facade.

The scary truth, which nobody wants to talk about, is that Facebook is a company designed to make money for its creators and stockholders. It does this by encouraging the sharing of personal data by its users, and then making that information available for use by marketers who buy ads on the platform. The more users the platform has, and the more data those users share, the more valuable Facebook is to its investors. Facebook is confronted with the dilemma of needing to reassure its users that their information is safe, even as its business model is designed to exploit the information of those very same users.

Facebook itself is built to addict its users. The more people using the platform, the more ads that can be shown, and the more money Facebook makes. The constant dopamine-spiking feedback loop of likes, notifications and updates serves to addict users as surely as any drug. “They’ve created the attention economy and are now engaged in a full-blown arms race to capture and retain human attention, including the attention of kids,” says Tristan Harris, a former Google design ethicist, who now serves as a senior fellow for the nonprofit advocacy group, Common Sense Media.

The internet has changed the face of commerce. But the most important product being purchased on the internet is not the latest toy marketed on Amazon, or the newest video streaming service. In the internet age, the most valuable commodity is you. Your information, your vote, and your efforts in pushing the agenda of those with money, means, and power.

Eric W. Austin lives in China and writes about community issues and technology. He can be reached by email at

ERIC’S TECH TALK: My bipolar relationship with the Internet

by Eric W. Austin

I love technology. I hate technology. I just can’t decide.

When I was a boy, I dreamt of moving up to the mountains and living in the hollowed-out trunk of a redwood tree, making rabbit snares from deer tendon and barbed wire. Then Dad brought home our first computer. Now, I panic when the lights flicker and fret over whether I have enough gas for the generator.

Recently, I ‘liked’ a post on Facebook from a Californian cousin. He had shared an article from The Washington Post about a product that has been introduced into more than 600 American schools meant to reduce cell phone use by students. The idea is pretty simple. Each student receives an opaque, nylon case just big enough to hold a cell phone. On the open end of the pouch is a magnetic clasp. When touched to a special ‘magnetizer,’ the clasp is magnetized and becomes impossible to open. The students remain in possession of their phones at all times, but cannot see or access them while they are locked away in the nylon pouch. At the end of the school day, the students touch the cases to the special magnetizer again, which this time de-magnetizes the clasps, once again giving students access to their phones.

The program has been an unsurprising success. Grades have gone up, behavior problems have dropped, and people have started talking to one another again. What a great idea, I thought. They should implement this in every school!

Then another school shooting happened in Parkland, Florida. In its aftermath, the first thing many of those kids did was text their parents to let them know they were okay. And I thought, What if all those kids had had their phones locked away?

Whether it’s technology or just life that refuses to be free of rough edges, I don’t know. Technology has certainly invaded our society and isn’t going away anytime soon. I’m sure the first guy to invent a fork thought it was a great idea right up to the moment when his neighbor took it and stabbed him in the eye. How long before a shooter enters a school with a signal-locating device and goes on a hunting trip?

When I graduated from high school in 1993, school shootings were unheard of and the Internet was still in its infancy. My first year of college I still wrote letters home to my parents. Only the computer lab had a connection to something we might recognize as the Internet. However, things were moving fast, and the following year Netscape, the first popular browser, was released. Then in 1995, an online bookstore called Amazon.com launched, and I was hooked.

It was the dawn of the technological revolution, and for me, a time of discovery. The ability to find information on anything, talk to people from half a world away, and engage in discussions on topics considered taboo in the circles I’d grown up in, was integral to my emergence into young adulthood. I remember thinking at the time: This will change the world! This will banish old superstitions and produce an educated population like never before!

Oh, how naive I was.

The Internet, like any tool, has a variable impact depending on how we wield it. On the one hand, it offers knowledge at your fingertips. On the other, it is cluttered with misinformation. And while we can choose to use it to expose ourselves to challenging views and evidence-based information, the Internet is also designed to cater to our biases.

Take Facebook or Twitter, for example. They are basically set up as digital versions of a high school clique, with posts judged by the number of ‘likes’ they receive, rather than the validity of their content. Shouting is encouraged, and gossip trends faster than facts.

Social media gives us additional tools to customize our feed by snoozing or unfollowing anyone that might annoy us. Over time, our choices feed into an advanced algorithm whose job it is to ensure our experience is as pleasant as possible. God forbid we might encounter something that challenges our established beliefs!

And the entire internet is like this, allowing us to filter the information we receive: follow certain people on Twitter and block others; customize your search results so you don’t have to see objectionable content; tweak your spam filter so you won’t need to look at any more emails about erectile dysfunction.

Am I proposing we eliminate these filter options? Hell, no! But in small and subtle ways the internet encourages us to customize our flow of information so that the world we see is not the ‘real’ one, but instead a version that is tailored specifically to us. The overall effect is to emphasize our specific individuality at the expense of our collective commonality.

In some ways, technology has united us like never before. In others, it constantly divides us.

Most of the news websites that have cropped up since the Internet’s inception present a strictly liberal or conservative viewpoint. What you see on cable news is 90 percent opinion and 10 percent news – a complete flip-flop from decades past. It seems the era of news neutrality is over.

Smaller, local newspapers still tend to be nonpartisan affairs, mostly out of the necessity of catering to a mixed, localized audience. But when you can build your niche from people all over the world, even the narrowest viewpoints can find a sizable audience.

This ability of the Internet to validate even the most fringe views often blows political differences out of all proportion. And by empowering us to customize the information we see to such a granular level, it allows us to create ever narrower filter-bubbles in which to live. Jesse Singal, writing in an Op-Ed for the New York Times, put it nicely: “What social media is doing is slicing the salami thinner and thinner, as it were, making it harder even for people who are otherwise in general ideological agreement to agree on facts about news events.”

The Internet’s ‘ability to divide’ is seeping into our society and symptoms are popping up everywhere. Our politics have never been so partisan – and it’s not just the politics. The narratives spun by each side are like alternate realities. Flipping between CNN and Fox News will leave you with the frightening feeling you’ve just glimpsed a parallel world.

The sad part is that we are doing this to ourselves; technology is just the tool we’re using to dig the chasm that divides us. The scary part is that technology tends to accelerate cultural change, both the good and the bad; and at the pace we’re moving, the near future is not looking good. We’re facing total gridlock at best, a cultural civil war at worst.

The problem with the old world was that it was too easy to live in a localized bubble and care little for what was happening a world away. The problem with this new world is that it’s too easy to live in a filter-bubble of our own creation and forget to talk to the people sitting right next to us.

Eric Austin lives in China and writes about technology and community issues. He can be reached by email at

ERIC’S TECH TALK – Fake news: coming to a town near you

Honest, open, accountable journalism needs help to continue

by Eric W. Austin
Technical Advisor

In Lewiston, fake news is taking over the town.

Five candidates faced off in the town’s mayoral race back in November. According to local election rules, if no candidate wins a majority of at least 50 percent, there is a second, run-off race between the top two candidates the following month. Ben Chin, a Democrat, was the clear favorite with 40 percent of the vote coming out of the November contest. His opponent, in second place with 29 percent, was Republican Shane Bouchard. The remaining 31 percent of the vote was split between the other three candidates. With no one achieving the required 50 percent majority, a run-off election was planned for early December.

Ben Chin

Chin, a progressive activist backed by the most popular politician in the country, Bernie Sanders, held a comfortable lead in initial polling. But in early December, something changed. News stories started popping up on social media that painted the Democrat in an unflattering light. One claimed Chin had allegedly called Lewiston voters a “bunch of racists” based on a series of leaked emails. Another reported his car had been towed because of “years of unpaid parking tickets.” All of the stories originated from a hitherto unknown Maine news website called the Maine Examiner.

It didn’t matter that the stories were misleading and inaccurate. As soon as a new article was uploaded to the website, links got posted to Facebook by various members of the Maine Republican Party. From there, the stories swiftly propagated through social media, as anything negative and partisan inevitably does.

In the end, Chin lost to Bouchard by 145 votes. It was all very dramatic, and inevitably led to questions about this new website that was suddenly breaking such startling scoops in the middle of a Lewiston mayoral election.

Shane Bouchard

Just who was the Maine Examiner? The Lewiston Sun Journal, the Boston Globe and others, in a bit of old-fashioned investigative journalism, decided to find out. The Sun Journal has run a series of stories in the months since, on which much of this article is based, and they have turned up some very interesting information.

First was the problem that nobody seemed to know who ran the website or wrote the articles. The site uses a registration-masking service which hides the true identity of the owners — a reasonable privacy precaution for an individual, but a curious practice for a business or news agency. Then there was the fact that none of the articles contain any bylines. They are simply credited to the generic moniker “Administrator.” The site’s “About Us” page lists no editor, no writers and no owners. It’s all very mysterious.

Recently, a big clue popped up from an unlikely source. A web developer in California, Tony Perry, heard about the controversy and decided to investigate. Perry did something very simple yet ingenious. He downloaded a bunch of the photos posted with the stories in question. Then he took a look at the pictures’ metadata. This is invisible information stored inside many computer files, often including things like the owner’s name and the date of the file’s creation. Perry found that a number of the photos were created by someone named Jason Savage. Further, he found that one of these pictures had been uploaded to the Maine Examiner website just 14 minutes after it had been created by ‘Jason Savage.’ This suggested a close connection between whoever Jason Savage was and the Maine Examiner website.

Then in late January, The Maine Beacon, a publication of the Maine People’s Alliance, published their own investigation into the mystery. Looking at publicly accessible error logs for the Maine Examiner website revealed internal server addresses containing the username ‘jasonsavage207.’

Additionally, the website template used by the Maine Examiner had been downloaded from a site hosting a public profile for a user named ‘jasonsavage207,’ and that profile showed the account was last active on the same day the template was installed on the Maine Examiner’s website.

The evidence was in, and it was pretty damning. It was clear Jason Savage was intimately connected to the Maine Examiner website, but who exactly was Jason Savage?

A quick Google search points to one particular Maine resident who also happens to be the executive director of the Maine Republican Party. This conclusion is inescapable once you learn that his Instagram handle is ‘jasonsavage207’ and his Twitter name is ‘jsavage207.’

The latest wrinkle to this developing story came a few weeks ago when the Maine Democratic Party formally filed an ethics complaint against the Maine Republican Party.

But this debacle cannot be blamed entirely on unethical political partisans. It is a symptom of a larger problem affecting America and the world. Newspapers are closing their doors everywhere. The advertising dollars that used to fund them are moving instead to internet platforms like Google, Facebook and Twitter. But these platforms don’t do journalism. They are simply information warehouses.

That means America’s free press is shrinking. And with smaller newspapers across the country going out of business as their revenue dries up, something must fill the void they leave behind. More and more, what has come to fill that void are pseudo-news websites like the Maine Examiner. Such sites masquerade as news sources but are nothing but partisan propaganda.

Good journalism is not anonymous; it’s accountable. Good journalism does not celebrate partisan politics; it strives for balance and accuracy.

For the past two years, I’ve been honored to serve on the board of directors for The Town Line, and I’ve been impressed by the staff’s deep commitment to the traditional journalistic values of honesty, openness and accountability. It’s the type of attitude we should be celebrating in this world of viral, mile-a-minute news. Unfortunately, a small, free community newspaper is just the kind of institution that is suffering the most in this post-internet world.

Founding Father Thomas Jefferson is often credited with the observation that “a properly functioning democracy depends on an informed electorate.” But an informed electorate depends on the work of dedicated journalists committed to providing accurate information to the American public.

And the moral to this story? Support your local paper, lest your town, too, become a victim of fake news.

TECH TALK: My deep, dark journey into political gambling


by Eric W. Austin

I opened the door and stepped hesitantly into the dimly lit room. Curtains covered all the windows. The only light came from a half-dozen computer screens glowing menacingly in the darkness. A scary-looking German Shepherd slumped in one corner. She growled low in her throat as I came in, and then went back to scratching at imaginary fleas. She had seen it all before: just another poor sucker thinking it was possible to predict the future.

But this wasn’t some hole-in-the-wall gambling den in a seedy part of Augusta. It was my office at my house here in China, Maine. I sat down at my desk and pulled up the website PredictIt. Would I be up or down today?

PredictIt is a different kind of gambling website. Instead of betting on sports events or dog races, you bet on events happening in politics. For example, the Friday before the recent government shutdown, I pulled out of the “Will the government be shutdown on January 22?” market after quadrupling my initial investment. I got into the market two weeks earlier when I thought shares for ‘Yes’ were severely undervalued at only 16¢ a share. When I exited the market on Friday, my 20 shares were valued at 69¢ each. I should have held the line, but still not a bad return on investment in only two weeks. When in doubt, bet on the incompetence of the American Congress.

Called the “stock market for politics,” PredictIt is an experimental political gambling website created by Victoria University of Wellington, in New Zealand. The university works in partnership with more than 50 universities across the world, including Harvard, Duke and Yale in the United States.

Why would a bunch of academics be interested in political gambling? They’re studying a psychological phenomenon called “the wisdom of the crowd.” This is a theory that postulates that a prediction derived from averaging the opinions of a large group of diverse individuals is often better than the prediction from a single expert.

The way it works on PredictIt is pretty simple. Political questions are posed which have a binary response, usually ‘Yes’ or ‘No’. Shares in either option cost between 1¢ and 100¢ (or $1). The value of shares is determined by the supply and demand of each market. In other words, if a lot of people are buying shares in the ‘Yes’ option, those shares will increase in value, and ‘No’ shares will decrease.

This setup allows one to quickly look at a share price and know how likely that particular prediction is to come true. Will Trump be impeached in his first term? Since shares are currently at 37¢, the market thinks there’s a 37 percent chance of that happening. I own 15 ‘Yes’ shares in this market. Shares have increased by 4¢ (or 4 percent) since I entered the market several months ago (from 33¢ to 37¢), so my initial investment of $4.95 has grown by 60¢ to $5.55 as of today.
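The position math is simple enough to spell out. Working in integer cents avoids floating-point rounding; the numbers mirror the 15-share trade described above:

```python
# Sketch of PredictIt-style position arithmetic, in integer cents.
# Each share pays 100 cents if the prediction comes true, 0 otherwise.

def position_cents(shares: int, price_cents: int) -> int:
    """Current market value of a position, in cents."""
    return shares * price_cents

entry = position_cents(15, 33)   # bought 15 'Yes' shares at 33 cents
today = position_cents(15, 37)   # same shares at today's 37 cents
print(today - entry)             # prints 60 (a 60-cent paper gain)
```

Since a share price in cents doubles as the market’s probability estimate in percent, the same function also reads as “value if the crowd is exactly right.”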

If you can think of a question related to politics, there’s likely a market for it on PredictIt. Will North Korea compete in the 2018 Winter Olympics? Currently likely at 94 percent. Who will be the 2020 Democratic nominee for president? At the moment, Bernie Sanders and Kamala Harris hold the top spots. How many senate seats will the GOP hold after the mid-term elections? “49 or fewer” is the most likely answer according to investors on PredictIt.

What are the chances that events over the next year will change things up? There’s no market for that question on PredictIt, but I’d say it’s at least 100 percent. Of course, that’s exactly what makes the game so exciting!

I first entered the world of political gambling back in May. I’d become a bit of a news junkie during the 2016 election (Donald Trump is the news equivalent of heroin), and was looking for something to give meaning to the endless hours I spent following the machinations in Washington. Initially, I started with just $20. Later, I added another $25 for a total account investment of $45. I lost $8 on a couple of bets early on, and have spent the past six months trying to make up for the losses. This government shutdown drama put me back on top. According to my trade history, after more than 169 bets and minus any trading fees, I’m currently up by $9.96.

Okay, so the IRS is unlikely to come knocking on my door when I don’t report it on my taxes in April. Still, it feels good to be back in the black.

Eric Austin is a writer, technical consultant, and news junkie living in China, Maine. He can be contacted by email at

TECH TALK: Does the future spell the end of local news?

Eric’s Tech Talk

by Eric W. Austin
Writer and Technical Consultant

In August of 1981, an upstart cable TV station began broadcasting these slick new videos set to music. They called it “music television.”

The first music video to air on the new channel was the Buggles’ song “Video Killed the Radio Star.” It was supposed to herald the end of radio’s dominance and introduce the world to television as a new musical medium. Instead, nearly 40 years later, music can hardly be found on MTV and radio is still going strong.

The song’s theme, a lament about the old technology of radio being supplanted by the new technology of television, is playing out again with the Internet and traditional print journalism. Sadly, the Buggles’ song may turn out to be more prophetic this time around.

The newspaper industry is currently in a crisis, and even a little paper like The Town Line is feeling the hurt.

Advertising revenue, the primary source of income for newspapers the world over, has been steadily falling since the early 2000s. Between 2012 and 2016, newspaper ad revenues dropped by 21 percent, only slightly better than the previous five years, when they dropped 24 percent. Overall, in the first 15 years of the new millennium, print advertising revenue fell to a third of what it was pre-Internet, from $60 billion to just $20 billion globally. And, unfortunately, that trend looks to continue in the years ahead.

On the positive side, circulation numbers are up for most newspapers, and public interest has never been higher, but income from subscriptions has not been enough to compensate for the lost advertising.

For small papers like The Town Line, which offers the paper for free and receives little income from subscriptions, this is an especially hard blow: more people are reading the paper, and there’s a great demand for content, but there is also less income from advertising to cover operating costs.

In the late ‘90s, The Town Line employed eight people: an editor, assistant editor, graphic artist, receptionist, bookkeeper and three sales people. Weekly issues often ran to 24 pages or more. Today that staff has been reduced to just three part-time employees, and the size of the paper has fallen to just 12 pages. There simply isn’t enough advertising to support a bigger paper.

People are more engaged than ever: they want to understand the world around them like never before. But as this advertiser-dependent business model continues to decay, unless newspapers find support from other sources, there is a real danger of losing the journalistic spirit that has played such an important role in our American experiment.

The reasons this is happening are fairly easy to explain. Businesses that once advertised exclusively in local papers have moved en masse to global platforms like Facebook and Google. These advertising platforms can offer the same targeted marketing once only possible with local publications, and they have the financial muscle to offer pricing and convenience that smaller publications cannot match.

This combination of local targeting and competitive pricing has caused a tidal wave of advertising to move from local papers to global corporations like Google, Facebook and Twitter instead. In the last decade, thousands of newspapers all across the nation have closed their doors. Often the first to succumb are small, local papers that have a limited geographic audience and fewer financial resources.

Like The Town Line.

There’s also been a transition in media coverage, from local issues to ones that have more of a national, or even global, audience. Websites are globally accessible, whereas traditional papers tend to have limited geographic range. Most online advertising pays on a ‘per-click’ basis, and a news story about China, Maine, will never get the same number of clicks as one about Washington, DC.

That smaller newspapers have been some of the hardest hit only makes this problem worse, as the remaining media companies tend toward huge conglomerates that are more concerned with covering national issues that have broad appeal, rather than local stories which may only be of interest to a small, localized audience.

This means that local issues are receiving less coverage, and as a result average Americans have fewer tools to make informed decisions about their communities.

When local journalism dies, what rises up to replace it? I think the answer is pretty clear: whichever website is willing to publish the most salacious stories generating the highest click-count – with little regard to proper sourcing or journalistic ethics.

Essentially, we’ve traded journalistic integrity for clickbait content.

Only a few weeks ago, the Bangor Daily News ran a story about a recent local election that may have been decided by a local ‘news’ site with no problem running rumor as news, and political partisans only too happy to propagate the dubious links through social media. Examples like this will only become more common in the years to come.

If we don’t support the traditional values of honesty, integrity and unbiased reporting that have been the bedrock of American journalism for two centuries, we may not like what rises up to replace it.

With advertising revenues hitting all-time lows nationwide, and looking to worsen in the years ahead, newspapers increasingly must rely on support from their readers to make ends meet. Since advertisers have abandoned them, it’s now up to ‘us’ to support local papers like The Town Line.

In this New Year, make a resolution to support your local newspaper. If you’re a business, help to reverse the trend by advertising in local publications. If you’re an individual, consider becoming a member of The Town Line. A small donation of $10 a month can make a world of difference. Best of all, since The Town Line is a 501(c)(3) nonprofit, private foundation, all donations are fully tax deductible!

To fulfill the American promise of an informed public, and fight the growing trend of clickbait sensationalism that has come to permeate much of the web, we must support local reporting more than ever. The time to act is now, before journalism loses another warrior in the fight for free expression.

Don’t let our generation be the one in which local journalism dies!

Eric Austin lives in China, Maine and writes about technology and community issues. He can be reached at

TECH TALK: Net Neutrality goes nuclear


by Eric Austin
Computer Technical Advisor

Do you like your cable TV service? I hope so, because your internet service is about to get a whole lot more like it.

On Thursday last week, the Federal Communications Commission (FCC), headed up by Trump appointee and former Verizon employee Ajit Pai, voted 3-2, along party lines, to repeal Obama-era rules that prevented internet providers from favoring some internet traffic over others.

You know how the cable company always puts the one channel you really want in a higher tier, forcing you to pay for the more expensive package even though you don’t like any of the other channels?

That’s right. Nickel-and-diming is coming to an internet service near you!

What does this really mean for you? I’m so glad you asked, but I’m afraid my answer will not make you happy.

It means that huge telecommunication companies like Comcast and TimeWarner now have the power to determine which internet services you have access to.

If you have a niche interest you pursue on the internet, you’re likely to be affected. Those websites with smaller audiences will have their bandwidth throttled unless you, the consumer, begin paying your Internet Service Provider (ISP) an extra fee.

That means you, Miniature Train Collector! That means you, Bass Fisherman! That means you, Foot-Fetish Fanatic!

It means pay-to-play is coming to the internet. When ISPs are allowed to favor some traffic over others, the Almighty Dollar will determine the winners and losers.

It means smaller newspapers like The Town Line, already suffering in a climate of falling ad revenue and competition from mega-sites like Buzzfeed and Facebook, will be forced to struggle even harder to find an audience.

Remember when chain super-stores like WalMart and Lowe’s forced out all the mom and pop stores? Remember when Starbucks and Subway took over Main Street?

That’s about to happen to the internet.

This move puts more control in the hands of mega-corporations – and in the hands of the men who own them. Do you want to choose your ISP based on where you fall on the political divide? What if Rupert Murdoch, owner of Fox News, bought Fairpoint or Spectrum? Which viewpoints do you think he would be likely to favor? Which websites would see their traffic throttled? What about George Soros, the billionaire liberal activist? No matter which side of the political divide you come down on, this is bad news for America.

In 2005, a little website called YouTube launched, competing against internet mega-giant Google and its own video service, Google Video. Less than two years later, Google bought the upstart for $1.65 billion. Today, YouTube is one of the most popular websites on the internet.

That won’t happen in the future. Under the new rules, Google can simply use its greater capital to bribe ISPs to squash competitor traffic. YouTube would have died on the vine. In fact, that’s exactly what’s likely to happen to YouTube’s competitors now. Oh, the irony!

Twitter, YouTube, Facebook — none of these sites would be successes today without the level-playing field the internet has enjoyed during its first two decades of life.

So this is now the future of the internet. The barrier to innovation and success just became greater for the little guy. Is that really what the web needs?

These are dangerous days we live in, with freedom and democracy apparently assailed from all sides. The internet has been a beacon of hope in these troubled times, giving voice to the voiceless and leveling the playing field in a game that increasingly favors the powerful.

This decision by the FCC under Trump is a huge boon to the power of mega-corporations, telecommunications companies, and established monopolies, but it’s a flaming arrow to the heart of everyday, average Americans and future entrepreneurs. America will be the poorer because of it.

If there’s anything left of the revolutionary spirit that founded America, it lives on in the rebellious noise of the World Wide Web. Let’s not squash it in favor of giving more money and control to big corporations. America has had enough of that. Leave the internet alone!

Eric Austin is a writer and technical consultant living in China, Maine. He writes about technical and community issues and can be contacted at

Further reading:

TECH TALK: Are you human or robot? The surprising history of CAPTCHAs


by Eric W. Austin

We’re all familiar with it. Try to log into your favorite website, and you’re likely to be presented with a question: Are you human or a robot? Then you might be asked to decipher a bit of garbled text or pick from a set of presented images. What’s this all about?

There’s an arms race going on between website owners and internet spam bots. Spam bots want to log into your site like a regular human, and then leave advertising spam comments on all your pages. Website admins naturally want to stop this from happening, as we have enough ordinary humans leaving pointless comments already.

Although several teams have claimed credit for inventing the technique, the term ‘CAPTCHA’ was first coined by a group of engineers at Carnegie Mellon University in 2001. They were looking for a way to allow websites to distinguish between live humans and the growing multitude of spam bots pretending to be human. They came up with the idea of showing a user distorted images of garbled words that could be understood by a real person but would confound a computer. It was from this idea that the ubiquitous CAPTCHA emerged.

CAPTCHA is an acronym that stands for ‘Completely Automated Public Turing test to tell Computers and Humans Apart.’

Around this same time, The New York Times was in the process of digitizing their back issues. They were employing a fairly new computer technology called Optical Character Recognition (OCR), which is the process of scanning a page of type and turning it into searchable text. Prior to this technology, a scanned page of text was simply an image and not searchable or capable of being cataloged based on its content.

Old newsprint can be difficult for computers to read, especially since the back catalog of The New York Times stretches back more than 100 years. If the ink has smeared, faded or is otherwise obscured, a computer could fail to correctly interpret the text.

Researchers at Carnegie Mellon hit on the brilliant idea of using these difficult words as CAPTCHA images, harnessing the power of internet users to read words a computer had failed to recognize. The project was christened ‘reCAPTCHA,’ and The New York Times put it to work deciphering its archive.

In 2009, Google bought the company responsible for reCAPTCHA and began using it to help digitize old books for their Google Books project. Whenever their computers run into trouble interpreting a bit of text, a scan of those words is uploaded to the reCAPTCHA servers and millions of internet users share in the work of decoding old books for Google’s online database.

I bet you didn’t realize you’re working for Google every time you solve one of those garbled word puzzles!

Of course, artificial intelligence and OCR technology have improved a lot in the years since. Now you are more likely to be asked to choose those images that feature street signs, rather than to solve a bit of distorted text. In this way, Google is using internet users to improve its artificial intelligence image recognition.

Soon computers will be smart enough to solve these picture challenges as well. In fact, the latest version of CAPTCHA barely requires any input from the internet user at all. If you have come to a webpage and been asked to check a box verifying that, “I’m not a robot,” and wondered how this can possibly filter out spam bots, you’re not alone. There’s actually a lot more going on behind that simple checkbox.

Invented by Google, and called “No CAPTCHA reCAPTCHA,” the new system employs an invisible algorithm behind the scenes that executes when you check the box. This algorithm analyzes your recent online behavior in order to determine if you are acting like a human or a bot. If it determines you might be a bot, you’ll get the familiar pop-up, asking you to choose from a series of images in order to verify your humanity.

This internet arms race pits artificial intelligence’s efforts to pass as human against website admins’ attempts to detect the impostors. The CAPTCHA will continue to evolve as the artificial intelligence of spam bots improves.

It’s an arms race we’re bound to lose in the end. But until then, the next time you’re forced to solve a garbled word puzzle, perhaps it will help ease the tedium to remember you’re helping preserve the world’s literary past every time you do!

TECH TALK: Life & Death of the Microchip

Examples of early vacuum tubes. (Image credit: Wikimedia Commons)


by Eric Austin
Computer Technical Advisor

The pace of technological advancement has a speed limit and we’re about to slam right into it.

The first electronic, programmable, digital computer was designed in 1944 by British telephone engineer Tommy Flowers, while working in London at the Post Office Research Station. Named the Colossus, it was built as part of the Allies’ wartime code-breaking efforts.

The Colossus didn’t get its name from being easy to carry around. Computers communicate using binary code, with each 0 or 1 represented by a switch that is either open or closed, on or off. In 1944, before the invention of the silicon chip that powers most computers today, this was accomplished using vacuum-tube technology. A vacuum tube is a small, vacuum-sealed, glass chamber which serves as a switch to control the flow of electrons through it. Looking much like a complicated light-bulb, vacuum tubes were difficult to manufacture, bulky and highly fragile.

Engineers were immediately presented with a major problem. The more switches a computer has, the faster it is and the larger the calculations it can handle. But each switch is an individual glass tube, and each must be wired to every other switch on the switchboard. This means that a computer with 2,400 switches, like the Colossus, would need a separate wire between every pair of its switches: nearly three million connections in all. As additional switches are added, the number of connections between components grows quadratically.
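The arithmetic behind the ‘tyranny of numbers’ can be sketched in a few lines (the function name is mine, and it counts each pair of switches once):

```python
def full_interconnect_wires(switches):
    """Unique point-to-point wires needed to connect every switch to every other."""
    return switches * (switches - 1) // 2

# A Colossus-scale machine with 2,400 switches:
print(full_interconnect_wires(2400))   # 2,878,800 connections

# Double the switch count and the wiring roughly quadruples:
print(full_interconnect_wires(4800))   # 11,517,600 connections
```

That quadrupling with every doubling is exactly why hand-wired vacuum-tube machines could never scale to consumer computers.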

This became known as the ‘tyranny of numbers’ problem, and because of it, for the first two decades after the Colossus was introduced, it looked as though computer technology would forever be out of reach of the average consumer.

Then two engineers, working separately in California and Texas, discovered a solution. In 1959, Jack Kilby, working at Texas Instruments, submitted his design for an integrated circuit to the US patent office. A few months later, Robert Noyce, founder of the influential Fairchild Semiconductor research center in Palo Alto, California, submitted his own patent. Although they each approached the problem differently, it was the combination of their ideas that resulted in the microchip we’re familiar with today.

The advantages of this new idea, to print microscopic transistors on a wafer of semi-conducting silicon, were immediately obvious. It was cheap, could be mass produced, and most importantly, its performance was scalable: as our miniaturization technology improved, we were able to pack more transistors (switches) onto the same chip of silicon. A chip with a higher number of transistors resulted in a more powerful computer, which allowed us to further refine our fabrication process. This self-reinforcing cycle of progress is what has fueled our technological advancements for the last 60 years.

Gordon Moore, who, along with Robert Noyce, later founded the microchip company Intel, was the first to understand this predictable escalation in computer speed and performance. In a paper he published in 1965, Moore observed that the number of components we could print on an integrated circuit was doubling every year. Ten years later the pace had slowed somewhat and he revised his estimate to doubling every two years. Nicknamed “Moore’s Law,” it’s a prediction that has remained relatively accurate ever since.

This is why every new iPhone is faster, smaller, and more powerful than the one from the year before. In 1944, the Colossus was built with 2,400 binary vacuum tubes. Today the chip in your smartphone possesses something in the neighborhood of seven billion transistors. That’s the power of the exponential growth we’ve experienced for more than half a century.
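A quick back-of-the-envelope check of those figures (the helper function is mine, not Moore’s):

```python
import math

def doublings_needed(start_count, end_count):
    """How many doublings turn start_count components into end_count?"""
    return math.log2(end_count / start_count)

# From Colossus's 2,400 switches to a ~7-billion-transistor phone chip:
d = doublings_needed(2_400, 7_000_000_000)
print(round(d, 1))    # 21.5 doublings
print(round(d * 2))   # 43 years at Moore's revised pace of one doubling every two years
```

About two dozen doublings cover the entire gap, which is what exponential growth looks like in practice.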

But this trend of rapid progress is about to come to an end. In order to squeeze seven billion components onto a tiny wafer of silicon, we’ve had to make everything really small. Like, incomprehensibly small. Components are only a few nanometers wide, with less than a dozen nanometers between them. For some comparison, a sheet of paper is about 100,000 nanometers thick. We are designing components so small that they will soon be only a few atoms across. At that point electrons begin to bleed from one transistor into another, because of a quantum effect called ‘quantum tunneling,’ and a switch that can’t be reliably turned off is no switch at all.

Experts differ on how soon the average consumer will begin to feel the effects of this limitation, but most predict we have less than a decade to find a solution or the technological progress we’ve been experiencing will grind to a stop.

What technology is likely to replace the silicon chip? That is exactly the question companies like IBM, Intel, and even NASA are racing to answer.

IBM is working on a project that aims to replace silicon transistors with ones made of carbon nanotubes. The change in materials would allow manufacturers to reduce the space between transistors from 14 nanometers to just three, allowing us to cram even more transistors onto a single chip before running into the electron-bleed effect we are hitting with silicon.

Another idea with enormous potential, the quantum computer, was first proposed back in the early 1980s, but has only recently become a reality. Whereas the binary nature of our current digital technology only allows for a switch to be in two distinct positions, on or off, the status of a switch in a quantum computer is determined by the superpositional state of a quantum particle, which, because of the weirdness of quantum mechanics, can be in the positions of on, off or both – simultaneously! The information contained in one quantum switch is called a ‘qubit,’ as opposed to the binary ‘bit’ of today’s digital computers.
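The strangeness is easier to see in numbers. Here is a toy sketch in plain Python (just the bookkeeping of amplitudes and probabilities, not a real quantum simulation):

```python
import math

# A qubit's state is a pair of amplitudes (alpha, beta); measuring it
# yields 'off' with probability |alpha|^2 and 'on' with |beta|^2.
alpha = beta = 1 / math.sqrt(2)       # equal superposition: on AND off at once
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
print(round(p0, 3), round(p1, 3))     # 0.5 0.5 -- the probabilities must sum to 1

# Describing a register of n qubits takes 2**n amplitudes, which is why
# qubit counts translate into such enormous computational headroom:
for n in (1, 10, 1097):
    digits = int(n * math.log10(2)) + 1
    print(f"{n} qubits -> 2**{n} amplitudes (a number about {digits} digits long)")
```

For the 1,097 qubits mentioned below, that state space is a number hundreds of digits long, far beyond anything a binary machine could enumerate.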

At their Quantum Artificial Intelligence Laboratory (QuAIL) in Silicon Valley, NASA, in partnership with Google Research and a coalition of 105 colleges and universities, operates the D-Wave 2X, a second-generation, 1,097-qubit quantum computer. Although it’s difficult to do a direct qubit-to-bit comparison because they are so fundamentally different, Google Research has released some data on its performance. They timed how long it takes the D-Wave 2X to do certain high-level calculations and compared the timings with those of a modern, silicon-based computer doing the same calculations. According to their published results, the D-Wave 2X is 100 million times faster than the computer on which you are currently reading this.

Whatever technology eventually replaces the silicon chip, it will be orders of magnitude better, faster and more powerful than what we have today, and it will have an unimaginable impact on the fields of computing, space exploration and artificial intelligence – not to mention the ways in which it will transform our ordinary, everyday lives.

Welcome to the beginning of the computer age, all over again.

TECH TALK: Bug hunting in the late 20th century

(image credit: XDanielx – public domain via Wikimedia Commons)


by Eric W. Austin
Computer Technical Advisor

The year is 1998. As the century teeters on the edge of a new millennium, no one can stop talking about Monica Lewinsky’s dress. September 11, 2001, is still a long ways off, and the buzz in the tech bubble is all about the Y2K bug.

I was living in California at the time, and one of my first projects in a burgeoning technical career was working on this turn-of-the-century technical issue. The bug hit the financial sector, which depends upon highly accurate transactional data, especially hard, and forced many companies to put together whole departments whose only responsibility was to deal with it.

I joined a team of about 80 people as a data analyst, working directly with the team leader to aggregate data on the progress of the project for the vice president of the department.

Time Magazine cover from January 1999

Born out of a combination of the memory constraints of early computers in the 1960s and a lack of foresight, the Y2K bug was sending companies into a panic by 1998.

In the last decade, we’ve become spoiled by the easy availability of data storage. Today, we have flash drives that store gigabytes of data and can fit in our pocket, but in the early days of computing, data storage was expensive, requiring huge server rooms with 24-hour temperature control. Programmers developed a number of tricks to compensate. Shaving off even a couple of bytes from a data record could mean the difference between a productive program and a crashing catastrophe. One of the ways they did this was by storing dates using only six digits – 11/09/17. Dropping the first two digits of the year from hundreds of millions of records meant significant savings in expensive data storage.

This convention was widespread throughout the industry. It was hard-coded into programs, assumed in calculations, and stored in databases. Everything had to be changed. The goal of our team was to identify every instance where a two-digit year was used, in any application, query or table, and change it to use a four-digit year instead. This was more complicated than it sounds, as many programs and tables had interdependencies with other programs and tables, and all these relationships had to be identified first, before changes could be made. Countrywide Financial, the company that hired me, was founded in 1969 and had about 7,000 employees in 1998. We had 30 years of legacy code that had to be examined line by line, tested and then put back into production without breaking any other functionality. It was an excruciating process.
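A toy illustration of how the bug bit, and what the fix looked like (the function names are mine, purely illustrative, not anyone’s actual code):

```python
def years_elapsed_legacy(start_yy, end_yy):
    """Legacy two-digit-year arithmetic, as stored in old records."""
    return end_yy - start_yy

def years_elapsed_fixed(start_yyyy, end_yyyy):
    """The Y2K remediation: store and subtract four-digit years."""
    return end_yyyy - start_yyyy

# A loan issued in 1999 ('99'), evaluated in 2000 ('00'):
print(years_elapsed_legacy(99, 0))      # -99 -- the account appears to predate itself
print(years_elapsed_fixed(1999, 2000))  # 1
```

Multiply that one broken subtraction across every interest calculation, maturity date, and payment schedule in 30 years of code, and the scale of the panic becomes easier to understand.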

It was such a colossal project there weren’t enough skilled American workers to complete the task in time, so companies reached outside the U.S. for talent. About 90 percent of our team was from India, sponsored on a special H-1B visa program expanded by President Bill Clinton in October of ’98, specifically to aid companies in finding enough skilled labor to combat the Y2K bug.

For a kid raised in rural New England, this was quite the culture shock, but I found it fascinating. The Indians spoke excellent English, although for most of them Hindi was their first language, and they were happy to answer my many questions about Indian culture.

I immediately became good friends with my cube-mate, an affable young Indian man and one of the team leaders. On my first day, he told me excitedly about being recently married to a woman selected by his parents while he had been working here in America. He laughed at my shock after explaining he had spoken with his bride only once – by telephone – before the wedding.

About a month into my contract, my new friend invited me to share dinner with him and his family. I was excited for my first experience of true Indian home-cooking.

By and large, Californians aren’t the most sociable neighbors. Maybe it’s all that time stuck in traffic, but it’s not uncommon to live in an apartment for years and never learn the name of the person across the hall. Not so in Srini’s complex!

Srini lived with a number of other Indian men and their families, also employed by Countrywide, in a small apartment complex in Simi Valley, about 20 minutes down the Ronald Reagan Freeway from where I lived in Chatsworth, on the northwest side of Los Angeles County.

I arrived in my best pressed shirt, and found that dinner was a multi-family affair. At least a dozen other people, from other Indian families living in nearby apartments – men, women, and children – gathered in my friend’s tiny living room.

The men lounged on the couches and chairs, crowded around the small television, while the women toiled in the kitchen, gossiping in Hindi and filling the tiny apartment with the smells of curry and freshly baking bread.

At dinner, I was surprised to find that only men were allowed to sit around the table. Although they had just spent the past two hours preparing the meal, the women sat demurely in chairs placed against the walls of the kitchen. When I offered to make room for them, Srini politely told me they would eat later.

I looked in vain for a fork or a spoon, but there were no utensils. Instead, everyone ate with their fingers. Food was scooped up with a thick, flatbread called Chapati. Everything was delicious.

Full of curry, flatbread, and perhaps a bit too much Indian beer, Srini and his wife walked me back to my car after dinner. Unfortunately, when Srini’s wife gave me a slight bow of farewell, a tad too eager to demonstrate my cultural savoir-faire, I mistook her bow for a French la bise instead. Bumped foreheads and much furious blushing resulted. Later, I had to apologize to Srini for attempting to kiss his wife. He thought it was hilarious.

Countrywide survived the Y2K bug, although the company helped bring down the economy a decade later. Srini moved on to other projects within the company, as did I. The apocalypticists would have to wait until 2012 to predict the end of the world again, but the problems – and opportunities – created by technology have only grown in the last 17 years: driverless cars, Big Data, and renegade A.I. – to deal with these problems, and to exploit the opportunities they open up for us, it will take a concerted effort from the brightest minds on the planet.

Thankfully, they’re already working on it.

Here at Tech Talk we take a look at the most interesting – and beguiling – issues in technology today. Eric can be reached at, and don’t forget to check out previous issues of the paper online at

TECH TALK: Virtual Money – The next evolution in commerce


by Eric Austin
Technical Consultant

Commerce began simply enough. When roving bands of hardly-human migratory hunters met in the Neolithic wilderness, it was only natural that they compare resources and exchange goods. The first trades were simple barters: two beaver skins and a mammoth tusk for a dozen arrowheads and a couple of wolf pelts.

As people settled down and built cities, there was a need to standardize commerce. In ancient Babylon, one of our earliest civilizations, barley served as a standard of measurement. The smallest monetary unit, the ‘shekel,’ was equal to 180 grains of barley.

The first coins appeared not long after. Initially, a coin was worth the value of the metal it was minted from, but eventually its intrinsic value separated from its representational value. When the state watered down the alloy of a gold coin with baser metals, such as tin or copper, they invented inflation. With the introduction of paper money, first in China in the 7th century CE and later in medieval Europe, the idea of intrinsic worth was done away with entirely for a representational value dictated by the state.

In the 19th and 20th centuries, corporations took over from the state as the main drivers in the evolution of commerce. Then, in the 1960s, the foundations of e-commerce were laid down with the establishment of the Electronic Data Interchange (EDI). The EDI defines the standards for transactions between two electronic devices on a network. It was initially developed out of Cold War military strategic thinking, specifically the need for logistical coordination of transported goods during the 1948 Berlin Airlift.

Worry about the security of such communication kept it from being used for financial transactions until 1994, when Netscape, an early browser company whose code later became the foundation of browsers such as Firefox, invented Secure Sockets Layer (SSL) encryption, a cryptographic protocol that provides communications security for computers over a network. After this breakthrough, various third parties began providing credit card processing services. A short time later, Verisign developed the first unique digital identifier, or SSL certificate, to verify merchants. With that, our current system for online commerce was complete.
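That handshake now ships in every mainstream language. A minimal sketch using Python’s standard ssl module shows the verification defaults a modern client inherits from Netscape’s original design (example.com below is just a placeholder host):

```python
import socket
import ssl

# A default context enables certificate verification and hostname checking,
# the descendants of SSL's original merchant-verification machinery.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Wrapping a TCP socket performs the handshake and validates the server's
# certificate chain (needs network access, so it is left commented out):
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())
```

Everything the consumer sees as a padlock icon rests on those few defaults being on.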

So why is Internet security still such a constant worry? Part of the problem is embedded in the structure of the Internet itself. The Internet is first and foremost designed to facilitate communication, and its openness and decentralized structure are at odds with the financial sector, which depends on the surety of a centralized authority overseeing all transactions. Most of our existing security issues on the internet are a consequence of these diametrically opposed philosophies.

Cryptocurrencies are the result of thinking about money with an Internet mindset. Classified as a virtual currency, cryptocurrencies such as Bitcoin aim to solve a number of problems present in our current online transactional system by embracing the decentralized structure of the Internet and by lifting some novel concepts from cryptography, the study of encryption and code-breaking.

Introduced in 2009, Bitcoin was the world’s first decentralized cryptocurrency. Bitcoin tackles the security issues of our current system by decentralizing its transaction data. Bitcoin’s public ledger is called a ‘blockchain,’ with each block in the chain representing a financial transaction. The database is designed to prevent data alteration by building references to other transactions into each record. To alter one record, a hacker would need to alter every other record that references it in order to avoid detection.

And since the database is maintained by every computer participating in that chain of transactions, any data altered on one computer would be immediately detected by every other computer on the network. This ‘decentralized data’ concept eliminates the big weakness in our current system. Today, the control of data is concentrated in a few centralized institutions, and if the security of any one of those institutions is penetrated, the entire system becomes compromised.
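The core trick, each record carrying a cryptographic fingerprint of the record before it, can be sketched in a few lines of Python (a toy model for illustration, not Bitcoin’s actual data format):

```python
import hashlib
import json

def block_hash(block):
    """Fingerprint a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(transactions):
    chain, prev = [], "0" * 64          # the first block points at all zeros
    for tx in transactions:
        block = {"tx": tx, "prev": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def chain_valid(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:       # any tampering breaks every later link
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob 5", "bob->carol 2"])
print(chain_valid(chain))               # True
chain[0]["tx"] = "alice->bob 500"       # quietly alter one early record...
print(chain_valid(chain))               # False -- detected immediately
```

Because every participating computer can re-run that validity check for itself, no central authority is needed to catch the forger.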

Beyond creating a secure financial transaction system for the World Wide Web, another goal of cryptocurrencies is to reduce or even eliminate financial fees by removing the need for a middleman overseeing the transaction. Since no centralized banking authority is necessary to track transactions, many of the costs associated with the involvement of banking institutions disappear. This has made Bitcoin the preferred currency for moving money around the world, as it can be done with a minimum of bureaucratic fees. Western Union currently charges a fee of 7 to 8 percent on a typical transfer. For migrant workers sending money home to their families, that’s a big hit.

With no personal, identifying information recorded as part of a Bitcoin transaction, it provides a level of anonymity not possible with our current system. However, as pointed out by MIT researchers, this anonymity only extends as far as the merchant accepting the transaction, who may still tag transaction IDs with personal customer info.

The anonymous nature of Bitcoin transactions is a boon to the security of consumers, but it presents a real problem for law enforcement. Bitcoin has become the favored currency for criminal activity. Kidnappers frequently insist on payment in Bitcoin. The WannaCry virus that attacked 200,000 computers in 150 countries earlier this year required victims to pay in Bitcoin.

The value of Bitcoin has steadily increased since it was introduced almost 10 years ago. In January 2014, one bitcoin was worth $869.61. As I write this in October 2017, that same bitcoin is valued at $5,521.32, an increase of more than 500 percent in less than four years. With approximately 16 million bitcoins in circulation, the total current value of the Bitcoin market is almost $92 billion. The smallest unit of Bitcoin is called a ‘satoshi,’ worth one hundred-millionth of a bitcoin.
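The column’s figures check out in two lines (the 16.6-million coin count is my estimate of the circulating supply in October 2017):

```python
jan_2014_price, oct_2017_price = 869.61, 5521.32

# Percent increase between the two quoted prices:
increase_pct = (oct_2017_price - jan_2014_price) / jan_2014_price * 100
print(round(increase_pct))                         # 535 -- "more than 500 percent"

# Rough total market value, in billions of dollars:
print(round(16_600_000 * oct_2017_price / 1e9))    # 92 -- "almost $92 billion"
```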

WannaCry isn’t the only cyberthreat to leverage Bitcoin either. Since Bitcoin is designed to reward computers which keep its database updated with new bitcoins, some malicious programmers have created viruses that hijack your computer in order to force it to mine bitcoins. Most people are not even aware this has happened. There may simply be a process running in the background, slowing down your PC, and quietly depositing earned bitcoins into a hacker’s digital wallet.

The benefits to be gained by this revolution in commerce – security, anonymity, and the elimination of the need for a financial middleman – are great, but the risks are not to be dismissed either. Even as the anonymous nature of cryptocurrencies provides the consumer with greater security and lower costs, it creates a haven for criminals and makes it more difficult for law enforcement to track cybercrime.

Whether Bitcoin sticks around or disappears to be replaced with something else, the philosophy and technology behind it will transform the financial sector in the decades to come. Our current internet commerce model is a slapdash attempt to stick an old system onto the new digital world of the Internet and cannot last. The road to a new financial reality is bound to be a rocky one, as banking institutions are not likely to accept the changes – and the erosion of their influence – easily. But, as shown by the recent Equifax hack, which exposed the personal information of 143 million Americans, maybe trusting our financial security to a few, centralized institutions isn’t such a great idea. And maybe cryptocurrencies are part of the answer.