ERIC’S TECH TALK – Fake news: coming to a town near you

Honest, open, accountable journalism needs help to continue

by Eric W. Austin
Technical Advisor

In Lewiston, fake news is taking over the town.

Five candidates faced off in the town's mayoral race back in November. Under local election rules, if no candidate wins a majority of the vote, a run-off race between the top two finishers is held the following month. Ben Chin, a Democrat, was the clear favorite, coming out of the November contest with 40 percent of the vote. His opponent, in second place with 29 percent, was Republican Shane Bouchard. The remaining 31 percent was split among the other three candidates. With no one achieving a majority, a run-off election was scheduled for early December.

Ben Chin

Chin, a progressive activist backed by the most popular politician in the country, Bernie Sanders, held a comfortable lead in initial polling. But in early December, something changed. News stories started popping up on social media that painted the Democrat in an unflattering light. One claimed Chin had allegedly called Lewiston voters a “bunch of racists” based on a series of leaked emails. Another reported his car had been towed because of “years of unpaid parking tickets.” All of the stories originated from a hitherto unknown Maine news website called the Maine Examiner.

It didn’t matter that the stories were misleading and inaccurate. As soon as a new article was uploaded to the website, links got posted to Facebook by various members of the Maine Republican Party. From there, the stories swiftly propagated through social media, as anything negative and partisan inevitably does.

In the end, Chin lost to Bouchard by 145 votes. It was all very dramatic, and inevitably led to questions about this new website that was suddenly breaking such startling scoops in the middle of a Lewiston mayoral election.

Shane Bouchard

Just who was the Maine Examiner? The Lewiston Sun Journal, the Boston Globe and others, in a bit of old-fashioned investigative journalism, decided to find out. The Sun Journal has run a series of stories in the months since, on which much of this article is based, and has turned up some very interesting information.

First was the problem that nobody seemed to know who ran the website or wrote the articles. The site uses a registration-masking service that hides the true identity of its owners — a reasonable privacy precaution for an individual, but a curious practice for a business or news agency. Then there was the fact that none of the articles carry bylines. They are simply credited to the generic moniker "Administrator." The site's "About Us" page lists no editor, no writers and no owners. It's all very mysterious.

Recently, a big clue popped up from an unlikely source. A web developer in California, Tony Perry, heard about the controversy and decided to investigate. Perry did something very simple yet ingenious. He downloaded a number of the photos posted with the stories in question and examined the pictures' metadata: invisible information embedded in many computer files, which often includes things like the owner's name and the date the file was created. Perry found that several of the photos were created by someone named Jason Savage. Further, he found that one of these pictures had been uploaded to the Maine Examiner website just 14 minutes after it was created, suggesting close collusion between whoever Jason Savage was and the Maine Examiner.
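
For the technically curious, here is a minimal sketch of that kind of metadata check in Python, using the Pillow imaging library. The filename is a placeholder, and fields like 'Artist' and 'DateTime' appear only if the camera or editing software actually recorded them:

```python
# Hypothetical example: inspect a downloaded photo's EXIF metadata.
# Requires the Pillow library (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS

image = Image.open("downloaded_photo.jpg")  # placeholder filename

# getexif() returns the photo's baseline EXIF tags, if any exist.
for tag_id, value in image.getexif().items():
    name = TAGS.get(tag_id, tag_id)
    if name in ("Artist", "Software", "DateTime"):
        print(f"{name}: {value}")
```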

Then in late January, The Maine Beacon, a publication of the Maine People's Alliance, published its own investigation into the mystery. Publicly accessible error logs for the Maine Examiner website revealed internal server paths containing the username 'jasonsavage207.'

Additionally, the Maine Examiner's website template was downloaded from a site that hosts a public profile for a user named 'jasonsavage207,' and that profile showed the account was last active on the same day the template was installed on the Maine Examiner's website.

The evidence was in, and it was pretty damning. It was clear Jason Savage was intimately connected to the Maine Examiner website, but who exactly was Jason Savage?

A quick Google search points to one particular Maine resident who also happens to be the executive director of the Maine Republican Party. This conclusion is inescapable once you learn that his Instagram handle is ‘jasonsavage207’ and his Twitter name is ‘jsavage207.’

The latest wrinkle to this developing story came a few weeks ago when the Maine Democratic Party formally filed an ethics complaint against the Maine Republican Party.

But this debacle cannot be blamed entirely on unethical political partisans. It is a symptom of a larger problem affecting America and the world. Newspapers are closing their doors everywhere. The advertising dollars that used to fund them are moving instead to internet platforms like Google, Facebook and Twitter. But these platforms don’t do journalism. They are simply information warehouses.

That means America’s free press is shrinking. And with smaller newspapers across the country going out of business as their revenue dries up, something must fill the void they leave behind. More and more, what has come to fill that void are pseudo-news websites like the Maine Examiner. Such sites masquerade as news sources but are nothing but partisan propaganda.

Good journalism is not anonymous; it’s accountable. Good journalism does not celebrate partisan politics; it strives for balance and accuracy.

For the past two years, I’ve been honored to serve on the board of directors for The Town Line, and I’ve been impressed by the staff’s deep commitment to the traditional journalistic values of honesty, openness and accountability. It’s the type of attitude we should be celebrating in this world of viral, mile-a-minute news. Unfortunately, a small, free community newspaper is just the kind of institution that is suffering the most in this post-internet world.

Thomas Jefferson is often credited with the observation that "a properly functioning democracy depends on an informed electorate." Whether or not the line is truly his, an informed electorate depends on the work of dedicated journalists committed to providing accurate information to the American public.

And the moral of this story? Support your local paper, lest your town, too, become a victim of fake news.

TECH TALK: My deep, dark journey into political gambling

ERIC’S TECH TALK

by Eric W. Austin

I opened the door and stepped hesitantly into the dimly lit room. Curtains covered all the windows. The only light came from a half-dozen computer screens glowing menacingly in the darkness. A scary-looking German Shepherd slumped in one corner. She growled low in her throat as I came in, and then went back to scratching at imaginary fleas. She had seen it all before: just another poor sucker thinking it was possible to predict the future.

But this wasn’t some hole-in-the-wall gambling den in a seedy part of Augusta. It was my office at my house here in China, Maine. I sat down at my desk and pulled up the website PredictIt.org. Would I be up or down today?

PredictIt is a different kind of gambling website. Instead of betting on sports events or dog races, you bet on events happening in politics. For example, the Friday before the recent government shutdown, I pulled out of the “Will the government be shutdown on January 22?” market after quadrupling my initial investment. I got into the market two weeks earlier when I thought shares for ‘Yes’ were severely undervalued at only 16¢ a share. When I exited the market on Friday, my 20 shares were valued at 69¢ each. I should have held the line, but still not a bad return on investment in only two weeks. When in doubt, bet on the incompetence of the American Congress.

Called the "stock market for politics," PredictIt is an experimental political gambling website created by Victoria University of Wellington, in New Zealand. The university runs it in partnership with more than 50 universities across the world, including Harvard, Duke and Yale here in the United States.

Why would a bunch of academics be interested in political gambling? They're studying a psychological phenomenon called "the wisdom of the crowd": the theory that a prediction derived by averaging the opinions of a large group of diverse individuals is often better than the prediction of a single expert.

The way it works on PredictIt is pretty simple. Political questions are posed which have a binary response, usually ‘Yes’ or ‘No’. Shares in either option cost between 1¢ and 100¢ (or $1). The value of shares is determined by the supply and demand of each market. In other words, if a lot of people are buying shares in the ‘Yes’ option, those shares will increase in value, and ‘No’ shares will decrease.

This setup allows one to glance at a share price and know how likely that particular prediction is to come true. Will Trump be impeached in his first term? Since shares are currently at 37¢, the market thinks there's a 37 percent chance of that happening. I own 15 'Yes' shares in this market. Shares have increased by 4¢ (4 percentage points) since I entered the market several months ago (from 33¢ to 37¢), so my initial investment of $4.95 has grown by 60¢ to $5.55 as of today.
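
The arithmetic is simple enough to sketch in a few lines of Python. This is just the column's own numbers expressed as code, not anything official from PredictIt:

```python
# PredictIt share math: a price of N cents implies an N percent chance.
def position_value(shares: int, price_cents: int) -> float:
    """Value of a position, in dollars."""
    return shares * price_cents / 100

cost = position_value(15, 33)    # bought 15 'Yes' shares at 33 cents
today = position_value(15, 37)   # the same shares at today's 37 cents
print("Implied probability today: 37%")
print(f"Cost ${cost:.2f}, worth ${today:.2f}, up ${today - cost:.2f}")
# Cost $4.95, worth $5.55, up $0.60
```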

If you can think of a question related to politics, there's likely a market for it on PredictIt. Will North Korea compete in the 2018 Winter Olympics? Currently pegged at 94 percent. Who will be the 2020 Democratic nominee for president? At the moment, Bernie Sanders and Kamala Harris hold the top spots. How many Senate seats will the GOP hold after the midterm elections? "49 or fewer" is the most likely answer, according to investors on PredictIt.

What are the chances that events over the next year will change things up? There’s no market for that question on PredictIt, but I’d say it’s at least 100 percent. Of course, that’s exactly what makes the game so exciting!

I first entered the world of political gambling back in May. I’d become a bit of a news junkie during the 2016 election (Donald Trump is the news equivalent of heroin), and was looking for something to give meaning to the endless hours I spent following the machinations in Washington. Initially, I started with just $20. Later, I added another $25 for a total account investment of $45. I lost $8 on a couple of bets early on, and have spent the past six months trying to make up for the losses. This government shutdown drama put me back on top. According to my trade history, after more than 169 bets and minus any trading fees, I’m currently up by $9.96.

Okay, so the IRS is unlikely to come knocking on my door when I don’t report it on my taxes in April. Still, it feels good to be back in the black.

Eric Austin is a writer, technical consultant, and news junkie living in China, Maine. He can be contacted by email at ericwaustin@gmail.com.

A look at the top stories in The Town Line in 2017

by Eric W. Austin

We’ve published a lot of stories over the past year, but which ones stood out from the crowd?

More than 20,000 people visited townline.org in 2017, with pages on the site viewed more than 70,000 times.

One of the advantages of a website over a traditional newspaper is the ability to track which stories are being read the most. Here I’d like to highlight those stories from The Town Line that attracted the most interest over the past year.

This article is based on statistics supplied by Google’s Analytics website tracking service, which monitors activity on townline.org.

Managing editor Roland Hallee’s November story, “China baseball player working to crack lineup at Newbury College,” about Dylan Presby’s impressive scholastic baseball career, claimed the top spot with more than 1,000 views. Presby, of China, went to Erskine Academy where he was named the Kennebec Journal’s Baseball Player of the Year, before being accepted by Newbury College in Brookline, Massachusetts. There he competes in the Division III New England Collegiate Conference (NECC). Roland writes, “But, that was high school. He has now moved on to a higher level of competition.” Read the entire article on townline.org.

Roland captured the second spot on this list as well, with his terrific (and prescient) column on the lack of birds in 2017. Appropriately titled, “Where have all the birds gone?”, Roland explores that very question and looks at some of the reasons behind the phenomenon. He writes, “The loss of bird populations in the Western Mountains of Maine includes three major causes.”

National Geographic magazine recently declared 2018 the “Year of the Bird,” putting a spotlight on the importance of our avian neighbors. I’m glad to see NatGeo has been reading The Town Line! Be sure to read Roland’s follow-ups as well, including “Update on birds” and “Bird disappearance is a phenomenon that exists nationwide.”

The 2017 Windsor Fair was a rousing success by all accounts and evidently people appreciated that we posted a ‘Schedule of Events’ for the festivities on townline.org. It was the third most visited page on the site this year. Don’t worry, we’ll be sure to do the same thing in 2018!

We began posting the China Police Log on townline.org back in 2016, but this past September’s police log was one of the first stories we posted to our new Facebook page, and it garnered so much interest it came in fourth in our list of 2017’s top pages, being viewed nearly 1,000 times. Big thanks to Tracey Frost, one of China’s part-time police officers who sends it to us every month!

The opioid crisis is often thought of as an epidemic of the big cities, but many rural areas of Central Maine have been affected deeply as well. My article exploring the issue and how it’s impacting our local communities, “Opiates in Central Maine: Not just a national issue,” was viewed close to 700 times. I write, “The solutions we need require not just a change in policy, but a shift in attitude as well.” This is the first article in a continuing series, so look for future installments from The Town Line in the months ahead.

Picking just five stories from 2017 is difficult in a year with so much great writing. Honorable mentions go to the multiple articles — primarily written by guest contributors from the local community — on the question of alewives in our local lakes and streams.

Emily Cates also had a number of popular articles this year, with highlights like "Wrap your trees in tin foil – The Sure-fire way to protect your trees in wintertime…and puzzle your neighborhood!" Check out more of her great writing in "Garden Works" on townline.org.

The Town Line also launched its new Facebook page in 2017. As of today, more than 400 of you have followed us on Facebook! Like or follow us to see new local stories appear as updates in your Facebook feed. You can find us at facebook.com/townline.org.

This week on townline.org, we’ve set up a special page to highlight the best articles in The Town Line from the past year. Find easy links to the great stories mentioned above, as well as other popular stories from 2017. Look for the ‘Best of 2017’ graphic on the homepage at townline.org.

Eric Austin is a technical consultant and writer living in China, Maine.  He’s also the admin for townline.org.

TECH TALK: Does the future spell the end of local news?

Eric’s Tech Talk

by Eric W. Austin
Writer and Technical Consultant

In August of 1981, an upstart cable TV station began broadcasting these slick new videos set to music. They called it “music television.”

The first music video to air on the new channel was the Buggles’ song “Video Killed the Radio Star.” It was supposed to herald the end of radio’s dominance and introduce the world to television as a new musical medium. Instead, nearly 40 years later, music can hardly be found on MTV and radio is still going strong.

The song’s theme, a lament about the old technology of radio being supplanted by the new technology of television, is playing out again with the Internet and traditional print journalism. Sadly, the Buggles’ song may turn out to be more prophetic this time around.

The newspaper industry is currently in a crisis, and even a little paper like The Town Line is feeling the hurt.

Advertising revenue, the primary source of income for newspapers the world over, has been falling steadily since the early 2000s. Between 2012 and 2016, newspaper ad revenues dropped by 21 percent, only slightly better than the previous five years, when they dropped 24 percent. Overall, in the first 15 years of the new millennium, print advertising revenue fell to a third of what it was pre-Internet, from $60 billion to just $20 billion globally. And, unfortunately, that trend looks set to continue in the years ahead.

On the positive side, circulation numbers are up for most newspapers, and public interest has never been higher, but income from subscriptions has not been enough to compensate for the lost advertising.

For small papers like The Town Line, which offers the paper for free and receives little income from subscriptions, this is an especially hard blow: more people are reading the paper, and there’s a great demand for content, but there is also less income from advertising to cover operating costs.

In the late '90s, The Town Line employed eight people: an editor, assistant editor, graphic artist, receptionist, bookkeeper and three salespeople. Weekly issues often ran to 24 pages or more. Today that staff has been reduced to just three part-time employees, and the size of the paper has fallen to just 12 pages. There simply isn't enough advertising to support a bigger paper.

People are more engaged than ever: they want to understand the world around them like never before. But as this advertiser-funded business model continues to decay, unless support is found from other sources, there is a real danger of losing the journalistic spirit that has played such an important role in our American experiment.

The reasons this is happening are fairly easy to explain. Businesses that once advertised exclusively in local papers have moved en masse to global platforms like Facebook and Google. These advertising platforms can offer the same targeted marketing once possible only with local publications, and they have the financial muscle to offer pricing and convenience that smaller publications cannot match.

This combination of local targeting and competitive pricing has caused a tidal wave of advertising to move from local papers to global corporations like Google, Facebook and Twitter instead. In the last decade, thousands of newspapers all across the nation have closed their doors. Often the first to succumb are small, local papers that have a limited geographic audience and fewer financial resources.

Like The Town Line.

There’s also been a transition in media coverage, from local issues to ones that have more of a national, or even global, audience. Websites are globally accessible, whereas traditional papers tend to have limited geographic range. Most online advertising pays on a ‘per-click’ basis, and a news story about China, Maine, will never get the same number of clicks as one about Washington, DC.

That smaller newspapers have been some of the hardest hit only makes the problem worse, as the remaining media companies tend to be huge conglomerates more concerned with covering national issues of broad appeal than local stories that may interest only a small, localized audience.

This means that local issues are receiving less coverage, and as a result average Americans have fewer tools to make informed decisions about their communities.

When local journalism dies, what rises up to replace it? I think the answer is pretty clear: whichever website is willing to publish the most salacious, click-generating stories – with little regard for proper sourcing or journalistic ethics.

Essentially, we’ve traded journalistic integrity for clickbait content.

Only a few weeks ago, the Bangor Daily News ran a story about a recent local election that may have been decided by a local ‘news’ site with no problem running rumor as news, and political partisans only too happy to propagate the dubious links through social media. Examples like this will only become more common in the years to come.

If we don’t support the traditional values of honesty, integrity and unbiased reporting that have been the bedrock of American journalism for two centuries, we may not like what rises up to replace it.

With advertising revenues hitting all-time lows nationwide, and looking to worsen in the years ahead, newspapers increasingly must rely on support from their readers to make ends meet. Since advertisers have abandoned them, it’s now up to ‘us’ to support local papers like The Town Line.

In this New Year, make a resolution to support your local newspaper. If you’re a business, help to reverse the trend by advertising in local publications. If you’re an individual, consider becoming a member of The Town Line. A small donation of $10 a month can make a world of difference. Best of all, since The Town Line is a 501(c)(3) nonprofit, private foundation, all donations are fully tax deductible!

To fulfill the American promise of an informed public, and fight the growing trend of clickbait sensationalism that has come to permeate much of the web, we must support local reporting more than ever. The time to act is now, before journalism loses another warrior in the fight for free expression.

Don’t let our generation be the one in which local journalism dies!

Eric Austin lives in China, Maine and writes about technology and community issues. He can be reached at ericwaustin@gmail.com.

TECH TALK: Net Neutrality goes nuclear

ERIC’S TECH TALK

by Eric Austin
Computer Technical Advisor

Do you like your cable TV service? I hope so, because your internet service is about to get a whole lot more like it.

On Thursday last week, the Federal Communications Commission (FCC), headed up by Trump appointee and former Verizon employee Ajit Pai, voted 3-2, along party lines, to repeal Obama-era rules that prevented internet providers from favoring some internet traffic over others.

You know how the cable company always puts the one channel you really want in a higher tier, forcing you to pay for the more expensive package even though you don’t like any of the other channels?

That’s right. Nickel-and-diming is coming to an internet service near you!

What does this really mean for you? I’m so glad you asked, but I’m afraid my answer will not make you happy.

It means that huge telecommunication companies like Comcast and Time Warner now have the power to determine which internet services you have access to.

If you have a niche interest you pursue on the internet, you’re likely to be affected. Those websites with smaller audiences will have their bandwidth throttled unless you, the consumer, begin paying your Internet Service Provider (ISP) an extra fee.

That means you, Miniature Train Collector! That means you, Bass Fisherman! That means you, Foot-Fetish Fanatic!

It means pay-to-play is coming to the internet. When ISPs are allowed to favor some traffic over others, the Almighty Dollar will determine the winners and losers.

It means smaller newspapers like The Town Line, already suffering in a climate of falling ad revenue and competition from mega-sites like Buzzfeed and Facebook, will be forced to struggle even harder to find an audience.

Remember when chain super-stores like WalMart and Lowe’s forced out all the mom and pop stores? Remember when Starbucks and Subway took over Main Street?

That’s about to happen to the internet.

This move puts more control in the hands of mega-corporations – and in the hands of the men who own them. Do you want to choose your ISP based on where you fall on the political divide? What if Rupert Murdoch, owner of Fox News, bought Fairpoint or Spectrum? Which viewpoints do you think he would be likely to favor? Which websites would see their traffic throttled? What about George Soros, the billionaire liberal activist? No matter which side of the political divide you come down on, this is bad news for America.

In 2005, a little website called YouTube launched. It was competing against an internet mega-giant called Google Video. Two years later Google bought the website for $1.65 billion. Today, YouTube is one of the most popular websites on the internet.

That won’t happen in the future. Under the new rules, Google can simply use its greater capital to bribe ISPs to squash competitor traffic. YouTube would have died on the vine. In fact, that’s exactly what’s likely to happen to YouTube’s competitors now. Oh, the irony!

Twitter, YouTube, Facebook — none of these sites would be successes today without the level-playing field the internet has enjoyed during its first two decades of life.

So this is now the future of the internet. The barrier to innovation and success just became greater for the little guy. Is that really what the web needs?

These are dangerous days we live in, with freedom and democracy apparently assailed from all sides. The internet has been a beacon of hope in these troubled times, giving voice to the voiceless and leveling the playing field in a game that increasingly favors the powerful.

This decision by the FCC under Trump is a huge boon to the power of mega-corporations, telecommunications companies, and established monopolies, but it’s a flaming arrow to the heart of everyday, average Americans and future entrepreneurs. America will be the poorer because of it.

If there’s anything left of the revolutionary spirit that founded America, it lives on in the rebellious noise of the World Wide Web. Let’s not squash it in favor of giving more money and control to big corporations. America has had enough of that. Leave the internet alone!

Eric Austin is a writer and technical consultant living in China, Maine. He writes about technical and community issues and can be contacted at ericwaustin@gmail.com.

TECH TALK: Are you human or robot? The surprising history of CAPTCHAs

ERIC’S TECH TALK

by Eric W. Austin

We’re all familiar with it. Try to log into your favorite website, and you’re likely to be presented with a question: Are you human or a robot? Then you might be asked to translate a bit of garbled text or pick from a set of presented images. What’s this all about?

There’s an arms race going on between website owners and internet spam bots. Spam bots want to log into your site like a regular human, and then leave advertising spam comments on all your pages. Website admins naturally want to stop this from happening, as we have enough ordinary humans leaving pointless comments already.

Although several teams have claimed credit for inventing the technique, the term 'CAPTCHA' was coined by a group of engineers at Carnegie Mellon University in 2001. They were looking for a way to let websites distinguish between live humans and the growing multitude of spam bots pretending to be human. Their idea was to show users distorted images of garbled words that could be understood by a real person but would confound a computer. From this idea the ubiquitous CAPTCHA emerged.

CAPTCHA is an acronym that stands for ‘Completely Automated Public Turing test to tell Computers and Humans Apart.’

Around this same time, The New York Times was in the process of digitizing its back issues. It was employing a fairly new computer technology called Optical Character Recognition (OCR): scanning a page of type and turning it into searchable text. Before this technology, a scanned page of text was simply an image, neither searchable nor capable of being cataloged by its content.

Old newsprint can be difficult for computers to read, especially since the back catalog of The New York Times stretches back more than 100 years. If the ink has smeared, faded or is otherwise obscured, a computer can fail to interpret the text correctly.

From this came a brilliant idea: use those difficult words as CAPTCHA images, harnessing the power of internet users to read the words a computer had failed to recognize. The project, also developed at Carnegie Mellon, was christened 'reCAPTCHA,' and The New York Times used it to help digitize its archive.

In 2009, Google bought the company responsible for reCAPTCHA and began using it to help digitize old books for their Google Books project. Whenever their computers run into trouble interpreting a bit of text, a scan of those words is uploaded to the reCAPTCHA servers and millions of internet users share in the work of decoding old books for Google’s online database.

I bet you didn’t realize you’re working for Google every time you solve one of those garbled word puzzles!

Of course, artificial intelligence and OCR technology have improved a lot in the years since. Now you are more likely to be asked to pick out the images that feature street signs than to solve a bit of distorted text. In this way, Google is using internet users to improve its artificial intelligence's image recognition.

Soon computers will be smart enough to solve these picture challenges as well. In fact, the latest version of CAPTCHA barely requires any input from the internet user at all. If you have come to a webpage and been asked to check a box verifying that, “I’m not a robot,” and wondered how this can possibly filter out spam bots, you’re not alone. There’s actually a lot more going on behind that simple checkbox.

Invented by Google, and called “No CAPTCHA reCAPTCHA,” the new system employs an invisible algorithm behind the scenes that executes when you check the box. This algorithm analyzes your recent online behavior in order to determine if you are acting like a human or a bot. If it determines you might be a bot, you’ll get the familiar pop-up, asking you to choose from a series of images in order to verify your humanity.
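
For site owners wondering what their end of the bargain looks like, here is a hedged sketch of the server-side check in Python, using the requests library. The reCAPTCHA widget drops a token into the form field g-recaptcha-response, and your server forwards that token to Google's verification endpoint; the secret key below is a placeholder for the one Google issues when you register a site:

```python
# Minimal sketch of server-side reCAPTCHA verification.
import requests

SECRET_KEY = "your-secret-key"  # placeholder; issued by Google at signup

def is_human(recaptcha_token: str) -> bool:
    """Ask Google whether this visitor's CAPTCHA token checks out."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": SECRET_KEY, "response": recaptcha_token},
    )
    return resp.json().get("success", False)
```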

This internet arms race pits artificial intelligence's efforts to pass as human against website admins' attempts to identify the impostors. The CAPTCHA will continue to evolve as the artificial intelligence of spam bots advances to keep pace.

It’s an arms race we’re bound to lose in the end. But until then, the next time you’re forced to solve a garbled word puzzle, perhaps it will help ease the tedium to remember you’re helping preserve the world’s literary past every time you do!

TECH TALK: Life & Death of the Microchip

Examples of early vacuum tubes. (Image credit: Wikimedia Commons)

ERIC’S TECH TALK

by Eric Austin
Computer Technical Advisor

The pace of technological advancement has a speed limit and we’re about to slam right into it.

The first electronic, programmable, digital computer was designed in 1944 by British telephone engineer Tommy Flowers, while working in London at the Post Office Research Station. Named the Colossus, it was built as part of the Allies’ wartime code-breaking efforts.

The Colossus didn’t get its name from being easy to carry around. Computers communicate using binary code, with each 0 or 1 represented by a switch that is either open or closed, on or off. In 1944, before the invention of the silicon chip that powers most computers today, this was accomplished using vacuum-tube technology. A vacuum tube is a small, vacuum-sealed, glass chamber which serves as a switch to control the flow of electrons through it. Looking much like a complicated light-bulb, vacuum tubes were difficult to manufacture, bulky and highly fragile.

Engineers were immediately presented with a major problem. The more switches a computer has, the faster it is and the larger the calculations it can handle. But each switch is an individual glass tube, and each must be wired to every other switch on the switchboard. This means that in a computer with 2,400 switches, like the Colossus, every switch needs a wire to each of the other 2,399, for a total of nearly three million individual connections. As additional switches are added, the number of connections between components grows quadratically.
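
The arithmetic behind that figure is easy to check; a couple of lines of Python confirm the count of wires needed to join every pair of switches:

```python
# Fully connecting n switches, one wire per pair, takes n*(n-1)/2 wires.
def wires_needed(n: int) -> int:
    return n * (n - 1) // 2

print(wires_needed(2400))  # 2878800 -- nearly three million
```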

This became known as the ‘tyranny of numbers’ problem, and because of it, for the first two decades after the Colossus was introduced, it looked as though computer technology would forever be out of reach of the average consumer.

Then two engineers, working separately in California and Texas, discovered a solution. In 1959, Jack Kilby, working at Texas Instruments, submitted his design for an integrated circuit to the US patent office. A few months later, Robert Noyce, founder of the influential Fairchild Semiconductor research center in Palo Alto, California, submitted his own patent. Although they each approached the problem differently, it was the combination of their ideas that resulted in the microchip we’re familiar with today.

The advantages of this new idea, printing microscopic transistors on a wafer of semi-conducting silicon, were immediately obvious. It was cheap, could be mass produced, and most importantly, its performance was scalable: as miniaturization technology improved, more transistors (switches) could be packed onto the same chip of silicon. A chip with a higher number of transistors meant a more powerful computer, which allowed us to further refine the fabrication process. This self-reinforcing cycle of progress is what has fueled our technological advancement for the last 60 years.

Gordon Moore, who, along with Robert Noyce, later founded the microchip company Intel, was the first to understand this predictable escalation in computer speed and performance. In a paper he published in 1965, Moore observed that the number of components we could print on an integrated circuit was doubling every year. Ten years later the pace had slowed somewhat and he revised his estimate to doubling every two years. Nicknamed “Moore’s Law,” it’s a prediction that has remained relatively accurate ever since.
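
To see the force of that doubling, here is a back-of-the-envelope projection in Python. It assumes a clean two-year doubling period and starts from Intel's first microprocessor, the 4004, which shipped in 1971 with 2,300 transistors:

```python
# Moore's Law as naive arithmetic: transistor counts double every 2 years.
def moores_law(start_count: int, start_year: int, year: int) -> int:
    doublings = (year - start_year) / 2
    return round(start_count * 2 ** doublings)

for year in (1971, 1991, 2011, 2017):
    print(year, f"{moores_law(2300, 1971, year):,}")
# 2017 projects to roughly 19 billion transistors -- the right order of
# magnitude for today's biggest chips.
```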

This is why every new iPhone is faster, smaller and more powerful than the one from the year before. In 1944, the Colossus was built with 2,400 vacuum-tube switches. Today the chip in your smartphone holds something in the neighborhood of seven billion transistors. That's the power of the exponential growth we've experienced for more than half a century.

But this trend of rapid progress is about to come to an end. In order to squeeze seven billion components onto a tiny wafer of silicon, we've had to make everything really small. Like, incomprehensibly small. Components are only a few nanometers wide, with less than a dozen nanometers between them. For some comparison, a sheet of paper is about 100,000 nanometers thick. We are designing components so small that they will soon be only a few atoms across. At that point electrons begin to bleed from one transistor into another, because of a quantum effect called 'quantum tunneling,' and a switch that can't be reliably turned off is no switch at all.

Experts differ on how soon the average consumer will begin to feel the effects of this limitation, but most predict we have less than a decade to find a solution or the technological progress we’ve been experiencing will grind to a stop.

What technology is likely to replace the silicon chip? That is exactly the question companies like IBM, Intel, and even NASA are racing to answer.

IBM is working on a project that aims to replace silicon transistors with ones made of carbon nanotubes. The change in materials would allow manufacturers to reduce the space between transistors from 14 nanometers to just three, allowing us to cram even more transistors onto a single chip before running into the electron-bleed effect we are hitting with silicon.

Another idea with enormous potential, the quantum computer, was first proposed in the early 1980s, but has only recently become a reality. Whereas the binary nature of our current digital technology allows a switch only two distinct positions, on or off, the status of a switch in a quantum computer is determined by the superpositional state of a quantum particle, which, because of the weirdness of quantum mechanics, can be on, off or both – simultaneously! The information contained in one quantum switch is called a 'qubit,' as opposed to the binary 'bit' of today's digital computers.
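
Here is a toy illustration of that idea in Python with numpy – not how a real quantum computer is programmed, but a sketch of the underlying math. A qubit is a two-entry vector of 'amplitudes,' and a Hadamard gate puts a qubit that starts as a definite 0 into an equal mix of 0 and 1:

```python
import numpy as np

zero = np.array([1.0, 0.0])                    # a qubit definitely 'off'
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

superposed = hadamard @ zero                   # now both states at once
print(superposed ** 2)                         # measurement probabilities:
# [0.5 0.5] -- a 50/50 chance of reading 0 or 1
```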

At their Quantum Artificial Intelligence Laboratory (QuAIL) in Silicon Valley, NASA, in partnership with Google Research and a coalition of 105 colleges and universities, has built the D-Wave 2X, a second-generation, 1,097-qubit quantum computer. Although a direct qubit-to-bit comparison is difficult because the two are so fundamentally different, Google Research has released some data on its performance. They timed how long the D-Wave 2X takes to do certain high-level calculations and compared the timings with those of a modern, silicon-based computer doing the same work. According to their published results, on those specialized problems the D-Wave 2X is 100 million times faster than the kind of computer on which you are currently reading this.

Whatever technology eventually replaces the silicon chip, it will be orders of magnitude better, faster and more powerful than what we have today, and it will have an unimaginable impact on the fields of computing, space exploration and artificial intelligence – not to mention the ways in which it will transform our ordinary, everyday lives.

Welcome to the beginning of the computer age, all over again.

TECH TALK: Bug hunting in the late 20th century

(image credit: XDanielx – public domain via Wikimedia Commons)

ERIC’S TECH TALK

by Eric W. Austin
Computer Technical Advisor

The year is 1998. As the century teeters on the edge of a new millennium, no one can stop talking about Monica Lewinsky's dress. September 11, 2001, is still a long way off, and the buzz in the tech bubble is all about the Y2K bug.

I was living in California at the time, and one of my first projects in a burgeoning technical career was working on this turn-of-the-century problem. The Y2K bug hit the financial sector, which depends on highly accurate transactional data, especially hard, and it forced many companies to put together whole departments whose only responsibility was to deal with it.

I joined a team of about 80 people as a data analyst, working directly with the team leader to aggregate data on the progress of the project for the vice president of the department.

Time Magazine cover from January 1999

Born out of a combination of the memory constraints of early computers in the 1960s and a lack of foresight, the Y2K bug was sending companies into a panic by 1998.

In the last decade, we've become spoiled by the easy availability of data storage. Today, we have flash drives that store gigabytes of data and fit in a pocket, but in the early days of computing, data storage was expensive, requiring huge server rooms with 24-hour temperature control. Programmers developed a number of tricks to compensate. Shaving even a couple of bytes off a data record could mean the difference between a productive program and a crashing catastrophe. One of those tricks was storing dates using only six digits – 11/09/17. Dropping the first two digits of the year from hundreds of millions of records meant significant savings in expensive data storage.

This convention was widespread throughout the industry. It was hard-coded into programs, assumed in calculations, and stored in databases. Everything had to be changed. The goal of our team was to identify every instance where a two-digit year was used, in any application, query or table, and change it to use a four-digit year instead. This was more complicated than it sounds, as many programs and tables had interdependencies with other programs and tables, and all these relationships had to be identified first, before changes could be made. Countrywide Financial, the company that hired me, was founded in 1969 and had about 7,000 employees in 1998. We had 30 years of legacy code that had to be examined line by line, tested and then put back into production without breaking any other functionality. It was an excruciating process.
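
To make the bug concrete, here is a small Python sketch. With only two digits, '17' is ambiguous – 1917 or 2017? Besides expanding fields to four digits, one common remediation of the era was 'windowing': choosing a pivot year and interpreting two-digit years on either side of it. The pivot value here is an illustrative assumption, not Countrywide's actual choice:

```python
PIVOT = 50  # assumed cutoff: 00-49 becomes 20xx, 50-99 becomes 19xx

def expand_year(yy: int) -> int:
    """Expand an ambiguous two-digit year to four digits."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(98))  # 1998
print(expand_year(17))  # 2017
```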

It was such a colossal project there weren’t enough skilled American workers to complete the task in time, so companies reached outside the U.S. for talent. About 90 percent of our team was from India, sponsored on a special H-1B visa program expanded by President Bill Clinton in October of ’98, specifically to aid companies in finding enough skilled labor to combat the Y2K bug.

For a kid raised in rural New England, this was quite the culture shock, but I found it fascinating. The Indians spoke excellent English, although for most of them Hindi was their first language, and they were happy to answer my many questions about Indian culture.

I immediately became good friends with my cube-mate, Srini, an affable young Indian man and one of the team leaders. On my first day, he told me excitedly about being recently married to a woman selected by his parents while he had been working here in America. He laughed at my shock after explaining he had spoken with his bride only once – by telephone – before the wedding.

About a month into my contract, my new friend invited me to share dinner with him and his family. I was excited for my first experience of true Indian home-cooking.

By and large, Californians aren’t the most sociable neighbors. Maybe it’s all that time stuck in traffic, but it’s not uncommon to live in an apartment for years and never learn the name of the person across the hall. Not so in Srini’s complex!

Srini lived with a number of other Indian men and their families, also employed by Countrywide, in a small apartment complex in Simi Valley, about 20 minutes down the Ronald Reagan Freeway from where I lived in Chatsworth, on the northwest side of Los Angeles County.

I arrived in my best pressed shirt, and found that dinner was a multi-family affair. At least a dozen other people, from other Indian families living in nearby apartments – men, women, and children – gathered in my friend’s tiny living room.

The men lounged on the couches and chairs, crowded around the small television, while the women toiled in the kitchen, gossiping in Hindi and filling the tiny apartment with the smells of curry and freshly baking bread.

At dinner, I was surprised to find that only men were allowed to sit around the table. Although they had just spent the past two hours preparing the meal, the women sat demurely in chairs placed against the walls of the kitchen. When I offered to make room for them, Srini politely told me they would eat later.

I looked in vain for a fork or a spoon, but there were no utensils. Instead, everyone ate with their fingers, scooping up food with a thick flatbread called chapati. Everything was delicious.

Full of curry, flatbread, and perhaps a bit too much Indian beer, I was walked back to my car after dinner by Srini and his wife. Unfortunately, a tad too eager to demonstrate my cultural savoir-faire, I mistook his wife's slight bow of farewell for a French la bise. Bumped foreheads and much furious blushing resulted. Later, I had to apologize to Srini for attempting to kiss his wife. He thought it was hilarious.

Countrywide survived the Y2K bug, although the company helped bring down the economy a decade later. Srini moved on to other projects within the company, as did I. The apocalypticists would have to wait until 2012 to predict the end of the world again, but the problems – and opportunities – created by technology have only grown in the last 17 years: driverless cars, Big Data, and renegade A.I. – to deal with these problems, and to exploit the opportunities they open up for us, it will take a concerted effort from the brightest minds on the planet.

Thankfully, they’re already working on it.

Here at Tech Talk we take a look at the most interesting – and beguiling – issues in technology today. Eric can be reached at ericwaustin@gmail.com, and don’t forget to check out previous issues of the paper online at townline.org.

TECH TALK: Virtual Money – The next evolution in commerce

ERIC’S TECH TALK

by Eric Austin
Technical Consultant

Commerce began simply enough. When roving bands of hardly-human migratory hunters met in the Neolithic wilderness, it was only natural that they compare resources and exchange goods. The first trades were simple barters: two beaver skins and a mammoth tusk for a dozen arrowheads and a couple of wolf pelts.

As people settled down and built cities, there was a need to standardize commerce. In ancient Babylon, one of our earliest civilizations, barley served as a standard of measurement. The smallest monetary unit, the ‘shekel,’ was equal to 180 grains of barley.

The first coins appeared not long after. Initially, a coin was worth the value of the metal it was minted from, but eventually its intrinsic value separated from its representational value. When the state watered down the alloy of a gold coin with baser metals, such as tin or copper, they invented inflation. With the introduction of paper money, first in China in the 7th century CE and later in medieval Europe, the idea of intrinsic worth was done away with entirely for a representational value dictated by the state.

In the 19th and 20th centuries, corporations took over from the state as the main drivers in the evolution of commerce. Then, in the 1960s, the foundations of e-commerce were laid down with the establishment of the Electronic Data Interchange (EDI). The EDI defines the standards for transactions between two electronic devices on a network. It was initially developed out of Cold War military strategic thinking, specifically the need for logistical coordination of transported goods during the 1948 Berlin Airlift.

Worries about the security of such communications kept EDI from being used for financial transactions until 1994, when Netscape, an early browser company whose technology lives on in browsers such as Firefox, invented Secure Sockets Layer (SSL) encryption, a cryptographic protocol that secures communications between computers on a network. After this breakthrough, various third parties began providing credit card processing services. A short time later, Verisign developed the first unique digital identifier, or SSL certificate, to verify merchants. With that, our current system for online commerce was complete.

So why is Internet security still such a constant worry? Part of the problem is embedded in the structure of the Internet itself. The Internet is first and foremost designed to facilitate communication, and its openness and decentralized structure is paradoxical to the financial sector, which depends on the surety of a centralized authority overseeing all transactions. Most of our existing security issues on the internet are a consequence of these diametrically opposed philosophies.

Cryptocurrencies are the result of thinking about money with an Internet mindset. Classified as a virtual currency, cryptocurrencies such as Bitcoin aim to solve a number of problems present in our current online transactional system by embracing the decentralized structure of the Internet and by lifting some novel concepts from cryptography, the study of encryption and code-breaking.

Introduced in 2009, Bitcoin was the world's first decentralized virtual currency. Bitcoin tackles the security issues of our current system by decentralizing its transaction data. Bitcoin's public ledger is called a 'blockchain,' with each block in the chain recording a batch of transactions. The ledger is designed to prevent tampering by building into each block a cryptographic reference to the block before it; to alter one record, a hacker would also need to alter every later block that refers back to it in order to avoid detection.

And since the database is maintained by every computer participating in that chain of transactions, any data altered on one computer would be immediately detected by every other computer on the network. This ‘decentralized data’ concept eliminates the big weakness in our current system. Today, the control of data is concentrated in a few centralized institutions, and if the security of any one of those institutions is penetrated, the entire system becomes compromised.
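
A minimal sketch of that hash-chaining idea fits in a dozen lines of Python. This is a toy, not Bitcoin's actual code, but it shows why tampering is detectable: each block stores a cryptographic fingerprint of the block before it, so changing any block breaks every link after it:

```python
import hashlib
import json

def make_block(data: str, prev_hash: str) -> dict:
    # Fingerprint the block's contents plus its link to the previous block.
    body = {"data": data, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

chain = [make_block("genesis", "0" * 64)]
for tx in ("Alice pays Bob 1 BTC", "Bob pays Carol 2 BTC"):
    chain.append(make_block(tx, chain[-1]["hash"]))

# Verify the chain: every block must point at its predecessor's hash.
for prev, block in zip(chain, chain[1:]):
    assert block["prev_hash"] == prev["hash"]
```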

Beyond creating a secure financial transaction system for the World Wide Web, another goal of cryptocurrencies is to reduce or even eliminate financial fees by removing the need for a middleman overseeing the transaction. Since no centralized banking authority is necessary to track transactions, many of the costs associated with banking institutions disappear. This has made Bitcoin a preferred currency for moving money around the world, since it can be done with a minimum of bureaucratic fees. Western Union currently charges roughly 7 to 8 percent to transfer $100. For migrant workers sending money home to their families, that's a big hit.

With no personal, identifying information recorded as part of a Bitcoin transaction, it provides a level of anonymity not possible with our current system. However, as pointed out by MIT researchers, this anonymity only extends as far as the merchant accepting the transaction, who may still tag transaction IDs with personal customer info.

The anonymous nature of Bitcoin transactions is a boon to the security of consumers, but it presents a real problem for law enforcement. Bitcoin has become a favored currency for criminal activity. Kidnappers frequently insist on payment in Bitcoin. The WannaCry ransomware that attacked 200,000 computers in 150 countries earlier this year required victims to pay in Bitcoin.

The value of Bitcoin has steadily increased since it was introduced almost 10 years ago. In January 2014, one bitcoin was worth $869.61. As I write this in October 2017, that same bitcoin is valued at $5,521.32, an increase of more than 500 percent in under four years. With approximately 16 million bitcoins in circulation, the total current value of the Bitcoin market is almost $92 billion. The smallest unit of Bitcoin is called a 'satoshi,' worth one hundred-millionth of a bitcoin.

WannaCry isn't the only cyberthreat to leverage Bitcoin, either. Since Bitcoin rewards the computers that keep its ledger updated by paying them in new bitcoins, some malicious programmers have created viruses that hijack your computer and force it to mine bitcoins for them. Most people are not even aware when this has happened. There may simply be a process running in the background, slowing down your PC and quietly depositing the earnings into a hacker's digital wallet.

The benefits to be gained by this revolution in commerce – security, anonymity, and the elimination of the financial middleman – are great, but the risks are not to be dismissed either. Even as the anonymity of cryptocurrencies provides the consumer with greater security and lower costs, it creates a haven for criminals and makes it more difficult for law enforcement to track cybercrime.

Whether Bitcoin sticks around or disappears to be replaced by something else, the philosophy and technology behind it will transform the financial sector in the decades to come. Our current internet commerce model is a slapdash attempt to bolt an old system onto the new digital world of the Internet, and it cannot last. The road to a new financial reality is bound to be a rocky one, as banking institutions are not likely to accept the changes – and the erosion of their influence – easily. But, as shown by the recent Equifax hack, which exposed the personal information of 143 million Americans, maybe trusting our financial security to a few centralized institutions isn't such a great idea. And maybe cryptocurrencies are part of the answer.

TECH TALK: A.I. on the Road: Who’s Driving?

ERIC’S TECH TALK

by Eric Austin
Computer Technical Advisor

In an automobile accident – in the moments before the car impacts an obstacle, in the seconds before glass shatters and steel crumples – we usually don't have time to think, and we are often haunted by self-recriminations in the days and weeks afterward. Why didn't I turn? Why didn't I hit the brakes sooner? Why'd I bother even getting out of bed this morning?

Driverless cars aim to solve this problem by replacing the human brain with a silicon chip. Computers think faster than we do and they are never flustered — unless that spinning beach ball is a digital sign of embarrassment? — but the move to put control of an automobile in the hands of a computer brings with it a new set of moral dilemmas.

Unlike your personal computer, a driverless car is a thinking machine. It must be capable of making moment-to-moment decisions that could have real life-or-death consequences.

Consider a simple moral quandary. Here’s the setup: It’s summer and you are driving down Lakeview Drive, headed toward the south end of China Lake. You pass China Elementary School. School is out of session so you don’t slow down, but you’ve forgotten about the Friend’s Camp, just beyond the curve, where there are often groups of children crossing the road, on their way to the lake on the other side. You round the curve and there they are, a whole gang of them, dressed in swim suits and clutching beach towels. You hit the brakes and are shocked when they don’t respond. You now have seven-tenths of a second to decide: do you drive straight ahead and strike the crossing kids or avoid them and dump your car in the ditch?

Not a difficult decision, you might think. Most of us would prefer a filthy fender to a bloody bumper. But what if instead of a ditch, it was a tree, and the collision killed everyone in the car? Do you still swerve to avoid the kids in the crosswalk and embrace an evergreen instead? What if your own children were in the car with you? Would you make the same decision?

If this little thought exercise made you queasy, that’s okay. Imagine how the programmers building the artificial intelligence (A.I.) that dictates the behavior of driverless cars must feel.

There may be a million-to-one chance of this happening to you, but with 253 million cars on the road, it will happen to someone. And in the near future, that someone might be a driverless car. Will the car's A.I. remember where kids often cross? How will it choose one life over another in a zero-sum game?

When we are thrust into these life-or-death situations, we often don’t have time to think and react mostly by instinct. A driverless car has no instinct, but can process millions of decisions a second. It faces the contradictory expectations of being both predictable and capable of reacting to the unexpected.

That is why driverless cars were not possible before recent advances in artificial intelligence and computing power. Rather than the linear, conditional-programming techniques of the past (If This Then That), driverless cars employ a newer field of computer science called "machine learning," which relies on more human-like faculties, such as pattern recognition, and can update its own code based on past results in order to attain better accuracy in the future. Basically, the developers give the A.I. a series of tests, and based on its success or failure in those tests, the A.I. updates its algorithms to improve its success rate.
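
As a toy illustration of that 'test, fail, adjust' loop, here is a one-neuron classifier in Python that nudges its own weights after every wrong answer until it reliably computes a simple rule. Real driverless-car models are incomparably larger, but the feedback principle is the same:

```python
# Teach one 'neuron' the logical AND rule by trial and error.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias, rate = 0.0, 0.0, 0.0, 0.1

for _ in range(20):                          # repeated rounds of 'tests'
    for (x1, x2), target in examples:
        guess = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - guess               # success/failure feedback
        w1 += rate * error * x1              # adjust toward better accuracy
        w2 += rate * error * x2
        bias += rate * error

print(w1, w2, bias)  # learned weights that now compute AND correctly
```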

That is what is happening right now in San Francisco, Boston, and soon New York. Las Vegas is testing a driverless bus system. These are opportunities for the driverless A.I. to encounter real-life situations and learn from those encounters before the technology is rolled out to the average consumer.

The only way we learn is from our mistakes. That is true of driverless cars, too, and they have made a few. There have been hardware and software failures and unforeseen errors. In February 2016, a Google driverless car experienced its first crash, turning into the path of a passing bus. In June 2016, a man in a self-driving Tesla was killed when the car tried to drive at full speed under a white tractor trailer crossing in front of it. The white trailer against the smoky backdrop of a cloudy sky fooled the car. The occupant was watching Harry Potter on the car’s television screen and never saw it coming.

Mistakes are ubiquitous in our lives; “human error” has become cliché. But will we be as forgiving of such mistakes when they are made by a machine? Life is an endless series of unfortunate coincidences, and no one can perfectly predict every situation. But, lest I sound like Dustin Hoffman in the film Rain Man, quoting plane crash statistics, let me say I am certain studies will eventually show autonomous vehicles reduce overall accident rates.

Also to be considered are the legal aspects. If a driverless car strikes a pedestrian, who is responsible? The owner of the driverless car? The car manufacturer? The developer of the artificial intelligence governing the car’s behavior? The people responsible for testing it?

We are in the century of A.I., and its first big win will be the self-driving car. The coming decade will be an interesting one to watch.

Get ready to have a new relationship with your automobile.

Eric can be emailed at ericwaustin@gmail.com.