TECH TALK: Net Neutrality goes nuclear

ERIC’S TECH TALK

by Eric Austin
Computer Technical Advisor

Do you like your cable TV service? I hope so, because your internet service is about to get a whole lot more like it.

On Thursday last week, the Federal Communications Commission (FCC), headed up by Trump appointee and former Verizon employee Ajit Pai, voted 3-2, along party lines, to repeal Obama-era rules that prevented internet providers from favoring some internet traffic over others.

You know how the cable company always puts the one channel you really want in a higher tier, forcing you to pay for the more expensive package even though you don’t like any of the other channels?

That’s right. Nickel-and-diming is coming to an internet service near you!

What does this really mean for you? I’m so glad you asked, but I’m afraid my answer will not make you happy.

It means that huge telecommunications companies like Comcast and Time Warner now have the power to determine which internet services you have access to.

If you have a niche interest you pursue on the internet, you’re likely to be affected. Those websites with smaller audiences will have their bandwidth throttled unless you, the consumer, begin paying your Internet Service Provider (ISP) an extra fee.

That means you, Miniature Train Collector! That means you, Bass Fisherman! That means you, Foot-Fetish Fanatic!

It means pay-to-play is coming to the internet. When ISPs are allowed to favor some traffic over others, the Almighty Dollar will determine the winners and losers.

It means smaller newspapers like The Town Line, already suffering in a climate of falling ad revenue and competition from mega-sites like Buzzfeed and Facebook, will be forced to struggle even harder to find an audience.

Remember when chain superstores like Walmart and Lowe’s forced out all the mom-and-pop stores? Remember when Starbucks and Subway took over Main Street?

That’s about to happen to the internet.

This move puts more control in the hands of mega-corporations – and in the hands of the men who own them. Do you want to choose your ISP based on where you fall on the political divide? What if Rupert Murdoch, owner of Fox News, bought Fairpoint or Spectrum? Which viewpoints do you think he would be likely to favor? Which websites would see their traffic throttled? What about George Soros, the billionaire liberal activist? No matter which side of the political divide you come down on, this is bad news for America.

In 2005, a little website called YouTube launched. It went up against Google Video, the video service of internet mega-giant Google. Less than two years later, Google bought the upstart for $1.65 billion. Today, YouTube is one of the most popular websites on the internet.

That won’t happen in the future. Under the new rules, Google can simply use its greater capital to bribe ISPs to squash competitor traffic. YouTube would have died on the vine. In fact, that’s exactly what’s likely to happen to YouTube’s competitors now. Oh, the irony!

Twitter, YouTube, Facebook — none of these sites would be successes today without the level-playing field the internet has enjoyed during its first two decades of life.

So this is now the future of the internet. The barrier to innovation and success just became greater for the little guy. Is that really what the web needs?

These are dangerous days we live in, with freedom and democracy apparently assailed from all sides. The internet has been a beacon of hope in these troubled times, giving voice to the voiceless and leveling the playing field in a game that increasingly favors the powerful.

This decision by the FCC under Trump is a huge boon to the power of mega-corporations, telecommunications companies, and established monopolies, but it’s a flaming arrow to the heart of everyday, average Americans and future entrepreneurs. America will be the poorer because of it.

If there’s anything left of the revolutionary spirit that founded America, it lives on in the rebellious noise of the World Wide Web. Let’s not squash it in favor of giving more money and control to big corporations. America has had enough of that. Leave the internet alone!

Eric Austin is a writer and technical consultant living in China, Maine. He writes about technical and community issues and can be contacted at ericwaustin@gmail.com.

TECH TALK: Are you human or robot? The surprising history of CAPTCHAs

ERIC’S TECH TALK

by Eric W. Austin

We’re all familiar with it. Try to log into your favorite website, and you’re likely to be presented with a question: Are you human or a robot? Then you might be asked to decipher a bit of garbled text or pick from a set of presented images. What’s this all about?

There’s an arms race going on between website owners and internet spam bots. Spam bots want to log into your site like a regular human, and then leave advertising spam comments on all your pages. Website admins naturally want to stop this from happening, as we have enough ordinary humans leaving pointless comments already.

Although several teams have claimed to have invented the technique, the term ‘CAPTCHA’ was coined by a group of researchers at Carnegie Mellon University in 2001. They were looking for a way to allow websites to distinguish between live humans and the growing multitude of spam bots pretending to be human. They came up with the idea of showing a user distorted images of garbled words that could be understood by a real person but would confound a computer. It was from this idea that the ubiquitous CAPTCHA emerged.

CAPTCHA is an acronym that stands for ‘Completely Automated Public Turing test to tell Computers and Humans Apart.’

Around this same time, The New York Times was in the process of digitizing its back issues. The paper was employing a fairly new computer technology called Optical Character Recognition (OCR), which is the process of scanning a page of type and turning it into searchable text. Prior to this technology, a scanned page of text was simply an image, not searchable or capable of being cataloged based on its content.

Old newsprint can be difficult for computers to read, especially since the back catalog of The New York Times stretches back more than 100 years. If the ink has smeared, faded or is otherwise obscured, a computer can fail to correctly interpret the text.

This led to a brilliant idea from the same Carnegie Mellon team: use those troublesome words as CAPTCHA images, harnessing the power of internet users to read the words a computer had failed to recognize. The project was reinvented as ‘reCAPTCHA,’ and The New York Times put it to work on its archive.

In 2009, Google bought the company responsible for reCAPTCHA and began using it to help digitize old books for their Google Books project. Whenever their computers run into trouble interpreting a bit of text, a scan of those words is uploaded to the reCAPTCHA servers and millions of internet users share in the work of decoding old books for Google’s online database.

I bet you didn’t realize you’re working for Google every time you solve one of those garbled word puzzles!

Of course, artificial intelligence and OCR technology have improved a lot in the years since. Now you are more likely to be asked to pick out the images that feature street signs than to decipher a bit of distorted text. In this way, Google is using internet users to improve its image-recognition artificial intelligence.

Soon computers will be smart enough to solve these picture challenges as well. In fact, the latest version of CAPTCHA barely requires any input from the internet user at all. If you have come to a webpage and been asked to check a box verifying that, “I’m not a robot,” and wondered how this can possibly filter out spam bots, you’re not alone. There’s actually a lot more going on behind that simple checkbox.

Invented by Google, and called “No CAPTCHA reCAPTCHA,” the new system employs an invisible algorithm behind the scenes that executes when you check the box. This algorithm analyzes your recent online behavior in order to determine if you are acting like a human or a bot. If it determines you might be a bot, you’ll get the familiar pop-up, asking you to choose from a series of images in order to verify your humanity.
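
For the curious, here is a minimal sketch of what happens on the website’s side once that box is checked. It is written in Python purely for illustration: the endpoint and parameter names come from Google’s public reCAPTCHA documentation, while the function name, the form handling, and the SECRET_KEY placeholder are hypothetical.

```python
import requests

def is_human(form_data, secret_key):
    """Ask Google's siteverify endpoint whether the checkbox token is genuine."""
    token = form_data.get("g-recaptcha-response", "")   # added by the reCAPTCHA widget
    reply = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": secret_key, "response": token},
        timeout=5,
    )
    return reply.json().get("success", False)

# Hypothetical usage inside a comment-form handler:
# if is_human(request.form, SECRET_KEY):
#     save_comment()
```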

This internet arms race pits artificial intelligence’s efforts to pass as human against website admins’ attempts to detect it. The CAPTCHA will have to keep evolving as the artificial intelligence behind spam bots improves.

It’s an arms race we’re bound to lose in the end. But until then, the next time you’re forced to solve a garbled word puzzle, perhaps it will help ease the tedium to remember you’re helping preserve the world’s literary past every time you do!

TECH TALK: Life & Death of the Microchip

Examples of early vacuum tubes. (Image credit: Wikimedia Commons)

ERIC’S TECH TALK

by Eric Austin
Computer Technical Advisor

The pace of technological advancement has a speed limit and we’re about to slam right into it.

The first electronic, programmable, digital computer was designed in 1944 by British telephone engineer Tommy Flowers, while working in London at the Post Office Research Station. Named the Colossus, it was built as part of the Allies’ wartime code-breaking efforts.

The Colossus didn’t get its name from being easy to carry around. Computers communicate using binary code, with each 0 or 1 represented by a switch that is either open or closed, on or off. In 1944, before the invention of the silicon chip that powers most computers today, this was accomplished using vacuum-tube technology. A vacuum tube is a small, vacuum-sealed, glass chamber which serves as a switch to control the flow of electrons through it. Looking much like a complicated light-bulb, vacuum tubes were difficult to manufacture, bulky and highly fragile.

Engineers were immediately presented with a major problem. The more switches a computer has, the faster it is and the larger the calculations it can handle. But each switch was an individual glass tube, and each had to be wired to all the others. Connecting every one of the Colossus’s 2,400 switches to every other switch would require nearly three million individual wires. As additional switches are added, the number of connections between components grows with the square of their count.
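
A quick back-of-the-envelope calculation, added here for illustration, shows just how fast that wiring problem grows as switches are added:

```python
# Point-to-point connections needed if every one of n switches
# must be wired to every other: n * (n - 1) / 2.
def connections(n):
    return n * (n - 1) // 2

for n in (100, 1_000, 2_400, 10_000):
    print(f"{n:>6} switches -> {connections(n):>10,} wires")
# 2,400 switches already need nearly 2.9 million wires; 10,000 would need
# about 50 million. Putting everything on a single chip sidesteps all of it.
```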

This became known as the ‘tyranny of numbers’ problem, and because of it, for the first two decades after the Colossus was introduced, it looked as though computer technology would forever be out of reach of the average consumer.

Then two engineers, working separately in California and Texas, discovered a solution. In 1959, Jack Kilby, working at Texas Instruments, submitted his design for an integrated circuit to the US patent office. A few months later, Robert Noyce, founder of the influential Fairchild Semiconductor research center in Palo Alto, California, submitted his own patent. Although they each approached the problem differently, it was the combination of their ideas that resulted in the microchip we’re familiar with today.

The advantages of this new idea, to print microscopic transistors on a wafer of semiconducting silicon, were immediately obvious. It was cheap, could be mass produced, and most importantly, its performance was scalable: as our miniaturization technology improved, we were able to pack more transistors (switches) onto the same chip of silicon. A chip with a higher number of transistors resulted in a more powerful computer, which allowed us to further refine our fabrication process. This self-feeding cycle of progress is what has fueled our technological advancements for the last 60 years.

Gordon Moore, who, along with Robert Noyce, later founded the microchip company Intel, was the first to understand this predictable escalation in computer speed and performance. In a paper he published in 1965, Moore observed that the number of components we could print on an integrated circuit was doubling every year. Ten years later the pace had slowed somewhat and he revised his estimate to doubling every two years. Nicknamed “Moore’s Law,” it’s a prediction that has remained relatively accurate ever since.
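
To get a feel for what doubling every two years adds up to, here is a rough sketch. The 1971 starting point, the Intel 4004’s roughly 2,300 transistors, is a commonly cited historical figure supplied here for illustration; it is not a number from this column.

```python
# Back-of-the-envelope Moore's Law: doubling every two years from a 1971 baseline.
def transistors(year, base_year=1971, base_count=2_300):
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1991, 2011, 2017):
    print(year, f"{transistors(year):,.0f}")
# 2017 works out to roughly 19 billion -- the same order of magnitude
# as the largest chips actually shipping that year.
```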

This is why every new iPhone is faster, smaller, and more powerful than the one from the year before. In 1944, the Colossus was built with 2,400 binary vacuum tubes. Today the chip in your smartphone possesses something in the neighborhood of seven billion transistors. That’s the power of the exponential growth we’ve experienced for more than half a century.

But this trend of rapid progress is about to come to an end. In order to squeeze seven billion components onto a tiny wafer of silicon, we’ve had to make everything really small. Like, incomprehensibly small. Components are only a few nanometers wide, with less than a dozen nanometers between them. For comparison, a sheet of paper is about 100,000 nanometers thick. We are designing components so small that they will soon be only a few atoms across. At that point electrons begin to bleed from one transistor into another, because of a quantum effect called ‘quantum tunneling,’ and a switch that can’t be reliably turned off is no switch at all.

Experts differ on how soon the average consumer will begin to feel the effects of this limitation, but most predict we have less than a decade to find a solution or the technological progress we’ve been experiencing will grind to a stop.

What technology is likely to replace the silicon chip? That is exactly the question companies like IBM, Intel, and even NASA are racing to answer.

IBM is working on a project that aims to replace silicon transistors with ones made of carbon nanotubes. The change in materials would allow manufacturers to reduce the space between transistors from 14 nanometers to just three, allowing us to cram even more transistors onto a single chip before running into the electron-bleed effect we are hitting with silicon.

Another idea with enormous potential, the quantum computer, was first proposed back in 1968, but has only recently become a reality. Whereas the binary nature of our current digital technology only allows a switch to be in two distinct positions, on or off, the state of a switch in a quantum computer is determined by the superposition states of a quantum particle, which, because of the weirdness of quantum mechanics, can be on, off, or both simultaneously! The information contained in one quantum switch is called a ‘qubit,’ as opposed to the binary ‘bit’ of today’s digital computers.
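
For the mathematically inclined, a single qubit can be sketched in a few lines of Python: its state is a pair of complex numbers, and a so-called Hadamard gate turns a definite 0 into an equal blend of 0 and 1. This is a toy simulation on an ordinary computer, not how a real quantum machine is programmed.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # the definite state "0"

# Hadamard gate: rotates a definite 0 or 1 into an equal superposition.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

qubit = H @ ket0                   # state is now (|0> + |1>) / sqrt(2)
probabilities = np.abs(qubit)**2   # chance of measuring a 0 or a 1

print(probabilities)               # [0.5 0.5] -- equally likely to read 0 or 1
```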

At their Quantum Artificial Intelligence Laboratory (QuAIL) in Silicon Valley, NASA, in partnership with Google Research and a coalition of 105 colleges and universities, operates the D-Wave 2X, a second-generation, 1,097-qubit quantum computer. Although it’s difficult to do a direct qubit-to-bit comparison because the two technologies are so fundamentally different, Google Research has released some data on its performance. They timed how long the D-Wave 2X takes to do certain highly specialized calculations and compared the results with those of a modern, silicon-based computer doing the same work. According to their published results, the D-Wave 2X was up to 100 million times faster at those tasks than a conventional single-core computer, quite possibly much like the one on which you are currently reading this.

Whatever technology eventually replaces the silicon chip, it will be orders of magnitude better, faster and more powerful than what we have today, and it will have an unimaginable impact on the fields of computing, space exploration and artificial intelligence – not to mention the ways in which it will transform our ordinary, everyday lives.

Welcome to the beginning of the computer age, all over again.

TECH TALK: Bug hunting in the late 20th century

(image credit: XDanielx – public domain via Wikimedia Commons)

ERIC’S TECH TALK

by Eric W. Austin
Computer Technical Advisor

The year is 1998. As the century teeters on the edge of a new millennium, no one can stop talking about Monica Lewinsky’s dress. September 11, 2001, is still a long ways off, and the buzz in the tech bubble is all about the Y2K bug.

I was living in California at the time, and one of my first projects in a burgeoning technical career was working on this turn-of-the-century problem. The Y2K bug hit the financial sector, which depends upon highly accurate transactional data, especially hard, forcing many companies to put together whole departments whose only responsibility was to deal with it.

I joined a team of about 80 people as a data analyst, working directly with the team leader to aggregate data on the progress of the project for the vice president of the department.

Time Magazine cover from January 1999

Born out of a combination of the memory constraints of early computers in the 1960s and a lack of foresight, the Y2K bug was sending companies into a panic by 1998.

In the last decade, we’ve become spoiled by the easy availability of data storage. Today, we have flash drives that store gigabytes of data and can fit in our pocket, but in the early days of computing, data storage was expensive, requiring huge server rooms with 24-hour temperature control. Programmers developed a number of tricks to compensate. Shaving off even a couple of bytes from a data record could mean the difference between a productive program and a crashing catastrophe. One of the ways they did this was by storing dates using only six digits – 11/09/17. Dropping the first two digits of the year from hundreds of millions of records meant significant savings in expensive data storage.

This convention was widespread throughout the industry. It was hard-coded into programs, assumed in calculations, and stored in databases. Everything had to be changed. The goal of our team was to identify every instance where a two-digit year was used, in any application, query or table, and change it to use a four-digit year instead. This was more complicated than it sounds, as many programs and tables had interdependencies with other programs and tables, and all these relationships had to be identified first, before changes could be made. Countrywide Financial, the company that hired me, was founded in 1969 and had about 7,000 employees in 1998. We had 30 years of legacy code that had to be examined line by line, tested and then put back into production without breaking any other functionality. It was an excruciating process.
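
A toy example, written in modern Python purely for illustration, shows the kind of breakage we were hunting for and the sort of fix we applied; the “pivot” year in the windowing rule below is an arbitrary choice for the sketch.

```python
# The old convention: store only the last two digits of the year.
issued  = "98"   # meant 1998
expires = "02"   # meant 2002

# Naive two-digit arithmetic decides the account expired 96 years
# before it was issued, instead of 4 years after.
print(int(expires) - int(issued))        # -96

# The fix: widen every stored year to four digits (or apply a
# "windowing" rule like the one below) before doing any math.
def widen(two_digit_year, pivot=30):     # pivot chosen arbitrarily for the sketch
    y = int(two_digit_year)
    return 2000 + y if y < pivot else 1900 + y

print(widen(expires) - widen(issued))    # 4
```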

It was such a colossal project there weren’t enough skilled American workers to complete the task in time, so companies reached outside the U.S. for talent. About 90 percent of our team was from India, sponsored on a special H-1B visa program expanded by President Bill Clinton in October of ’98, specifically to aid companies in finding enough skilled labor to combat the Y2K bug.

For a kid raised in rural New England, this was quite the culture shock, but I found it fascinating. The Indians spoke excellent English, although for most of them Hindi was their first language, and they were happy to answer my many questions about Indian culture.

I immediately became good friends with my cube-mate, Srini, an affable young Indian man and one of the team leaders. On my first day, he told me excitedly about his recent marriage to a woman selected by his parents while he had been working here in America. He laughed at my shock after explaining he had spoken with his bride only once – by telephone – before the wedding.

About a month into my contract, my new friend invited me to share dinner with him and his family. I was excited for my first experience of true Indian home-cooking.

By and large, Californians aren’t the most sociable neighbors. Maybe it’s all that time stuck in traffic, but it’s not uncommon to live in an apartment for years and never learn the name of the person across the hall. Not so in Srini’s complex!

Srini lived with a number of other Indian men and their families, also employed by Countrywide, in a small apartment complex in Simi Valley, about 20 minutes down the Ronald Reagan Freeway from where I lived in Chatsworth, on the northwest side of Los Angeles County.

I arrived in my best pressed shirt, and found that dinner was a multi-family affair. At least a dozen other people, from other Indian families living in nearby apartments – men, women, and children – gathered in my friend’s tiny living room.

The men lounged on the couches and chairs, crowded around the small television, while the women toiled in the kitchen, gossiping in Hindi and filling the tiny apartment with the smells of curry and freshly baking bread.

At dinner, I was surprised to find that only men were allowed to sit around the table. Although they had just spent the past two hours preparing the meal, the women sat demurely in chairs placed against the walls of the kitchen. When I offered to make room for them, Srini politely told me they would eat later.

I looked in vain for a fork or a spoon, but there were no utensils. Instead, everyone ate with their fingers, scooping up food with a thick flatbread called chapati. Everything was delicious.

After dinner, full of curry, flatbread, and perhaps a bit too much Indian beer, I let Srini and his wife walk me back to my car. Unfortunately, when Srini’s wife gave me a slight bow of farewell, I was a tad too eager to demonstrate my cultural savoir-faire and answered her bow with a French la bise instead. Bumped foreheads and much furious blushing resulted. Later, I had to apologize to Srini for attempting to kiss his wife. He thought it was hilarious.

Countrywide survived the Y2K bug, although the company helped bring down the economy a decade later. Srini moved on to other projects within the company, as did I. The apocalypticists would have to wait until 2012 to predict the end of the world again, but the problems and opportunities created by technology have only grown in the last 17 years: driverless cars, Big Data, renegade A.I. Dealing with these problems, and exploiting the opportunities they open up for us, will take a concerted effort from the brightest minds on the planet.

Thankfully, they’re already working on it.

Here at Tech Talk we take a look at the most interesting – and beguiling – issues in technology today. Eric can be reached at ericwaustin@gmail.com, and don’t forget to check out previous issues of the paper online at townline.org.

TECH TALK: Virtual Money – The next evolution in commerce

ERIC’S TECH TALK

by Eric Austin
Technical Consultant

Commerce began simply enough. When roving bands of migratory hunters met in the prehistoric wilderness, it was only natural that they compare resources and exchange goods. The first trades were simple barters: two beaver skins and a mammoth tusk for a dozen arrowheads and a couple of wolf pelts.

As people settled down and built cities, there was a need to standardize commerce. In ancient Babylon, one of our earliest civilizations, barley served as a standard of measurement. The smallest monetary unit, the ‘shekel,’ was equal to 180 grains of barley.

The first coins appeared not long after. Initially, a coin was worth the value of the metal it was minted from, but eventually its intrinsic value separated from its representational value. When the state watered down the alloy of a gold coin with baser metals, such as tin or copper, they invented inflation. With the introduction of paper money, first in China in the 7th century CE and later in medieval Europe, the idea of intrinsic worth was done away with entirely for a representational value dictated by the state.

In the 19th and 20th centuries, corporations took over from the state as the main drivers in the evolution of commerce. Then, in the 1960s, the foundations of e-commerce were laid down with the establishment of the Electronic Data Interchange (EDI). The EDI defines the standards for transactions between two electronic devices on a network. It was initially developed out of Cold War military strategic thinking, specifically the need for logistical coordination of transported goods during the 1948 Berlin Airlift.

Worry about the security of such communication kept it from being used for financial transactions until 1994, when Netscape, an early browser company whose code later formed the foundation of browsers such as Firefox, invented Secure Sockets Layer (SSL) encryption, a cryptographic protocol that provides communications security for computers over a network. After this breakthrough, various third parties began providing credit card processing services. A short time later, VeriSign developed the first unique digital identifier, or SSL certificate, to verify merchants. With that, our current system for online commerce was complete.

So why is Internet security still such a constant worry? Part of the problem is embedded in the structure of the Internet itself. The Internet is first and foremost designed to facilitate communication, and its openness and decentralized structure are at odds with the financial sector, which depends on the surety of a centralized authority overseeing all transactions. Most of our existing security issues on the internet are a consequence of these diametrically opposed philosophies.

Cryptocurrencies are the result of thinking about money with an Internet mindset. Classified as a virtual currency, cryptocurrencies such as Bitcoin aim to solve a number of problems present in our current online transactional system by embracing the decentralized structure of the Internet and by lifting some novel concepts from cryptography, the study of encryption and code-breaking.

Introduced in 2009, Bitcoin was the world’s first decentralized cryptocurrency. Bitcoin tackles the security issues of our current system by decentralizing its transaction data. Bitcoin’s public ledger is called a ‘blockchain,’ with each block in the chain recording a batch of transactions. The ledger is designed to prevent data alteration by building a cryptographic fingerprint, or hash, of the previous block into each new block. To alter one record, a hacker would need to recompute every block that comes after it in order to avoid detection.
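
A stripped-down sketch makes the idea concrete. This is not Bitcoin’s actual code, just a toy hash chain in Python showing why tampering with an old block is immediately visible:

```python
import hashlib, json, time

def hash_block(block):
    """Fingerprint a block by hashing its canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    chain.append({
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        # Each new block carries the fingerprint of the block before it.
        "prev_hash": hash_block(chain[-1]) if chain else "0" * 64,
    })

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])

# Tamper with an old block and every link after it breaks.
chain[0]["transactions"][0]["amount"] = 500
print(hash_block(chain[0]) == chain[1]["prev_hash"])   # False -- tampering detected
```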

And since the database is maintained by every computer participating in that chain of transactions, any data altered on one computer would be immediately detected by every other computer on the network. This ‘decentralized data’ concept eliminates the big weakness in our current system. Today, the control of data is concentrated in a few centralized institutions, and if the security of any one of those institutions is penetrated, the entire system becomes compromised.

Beyond creating a secure financial transaction system for the World Wide Web, another goal of cryptocurrencies is to reduce or even eliminate financial fees by removing the need for a middleman overseeing the transaction. Since no centralized banking authority is necessary to track transactions, many of the costs associated with the involvement of banking institutions disappear. This has made Bitcoin the preferred currency for moving money around the world, as it can be done with a minimum of bureaucratic fees. Western Union, by contrast, charges a fee of roughly 7 to 8 percent on a typical $100 transfer. For migrant workers sending money home to their families, that’s a big hit.

With no personal, identifying information recorded as part of a Bitcoin transaction, it provides a level of anonymity not possible with our current system. However, as pointed out by MIT researchers, this anonymity only extends as far as the merchant accepting the transaction, who may still tag transaction IDs with personal customer info.

The anonymous nature of Bitcoin transactions is a boon to the security of consumers, but it presents a real problem for law enforcement. Bitcoin has become the favored currency for criminal activity. Kidnappers frequently insist on payment in Bitcoin. The WannaCry ransomware that attacked 200,000 computers in 150 countries earlier this year required victims to pay in Bitcoin.

The value of Bitcoin has increased dramatically since it was introduced almost 10 years ago. In January 2014, one bitcoin was worth $869.61. As I write this in October 2017, that same bitcoin is valued at $5,521.32, an increase of more than 500 percent in under four years. With approximately 16 million bitcoins in circulation, the total current value of the Bitcoin market is almost $92 billion. The smallest unit of Bitcoin is called a ‘satoshi,’ worth one hundred-millionth of a bitcoin.
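
The arithmetic behind those figures is simple enough to check for yourself. In the sketch below, the circulating supply of about 16.6 million coins is an approximation supplied for illustration:

```python
SATOSHIS_PER_BTC = 100_000_000     # 1 bitcoin = 100 million satoshis

price_usd = 5_521.32               # the October 2017 price quoted above
supply    = 16_600_000             # approximate circulating supply (assumed)

print(f"Market value: ${price_usd * supply / 1e9:.1f} billion")   # ~ $91.7 billion
print(f"One satoshi:  ${price_usd / SATOSHIS_PER_BTC:.8f}")       # ~ $0.00005521
```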

WannaCry isn’t the only cyberthreat to leverage Bitcoin either. Since Bitcoin is designed to reward computers which keep its database updated with new bitcoins, some malicious programmers have created viruses that hijack your computer in order to force it to mine bitcoins. Most people are not even aware this has happened. There may simply be a process running in the background, slowing down your PC, and quietly depositing earned bitcoins into a hacker’s digital wallet.

The benefits to be gained by this revolution in commerce – security, anonymity, and the elimination of the need for a financial middleman – are great, but the risks are not to be dismissed either. Even as the anonymous nature of cryptocurrencies provides the consumer with greater security and lower costs, it creates a haven for criminals and makes it more difficult for law enforcement to track cybercrime.

Whether Bitcoin sticks around or disappears to be replaced with something else, the philosophy and technology behind it will transform the financial sector in the decades to come. Our current internet commerce model is a slapdash attempt to stick an old system onto the new digital world of the Internet and cannot last. The road to a new financial reality is bound to be a rocky one, as banking institutions are not likely to accept the changes – and the erosion of their influence – easily. But, as shown by the recent Equifax hack, which exposed the personal information of 143 million Americans, maybe trusting our financial security to a few centralized institutions isn’t such a great idea. And maybe cryptocurrencies are part of the answer.

TECH TALK: A.I. on the Road: Who’s Driving?

ERIC’S TECH TALK

by Eric Austin
Computer Technical Advisor

In an automobile accident, in the moments before your car strikes an obstacle and the seconds before glass shatters and steel crumples, we usually don’t have time to think, and we are often haunted by self-recriminations in the days and weeks afterward. Why didn’t I turn? Why didn’t I hit the brakes sooner? Why’d I bother even getting out of bed this morning?

Driverless cars aim to solve this problem by replacing the human brain with a silicon chip. Computers think faster than we do and they are never flustered — unless that spinning beach ball is a digital sign of embarrassment? — but the move to put control of an automobile in the hands of a computer brings with it a new set of moral dilemmas.

Unlike your personal computer, a driverless car is a thinking machine. It must be capable of making moment-to-moment decisions that could have real life-or-death consequences.

Consider a simple moral quandary. Here’s the setup: It’s summer and you are driving down Lakeview Drive, headed toward the south end of China Lake. You pass China Elementary School. School is out of session so you don’t slow down, but you’ve forgotten about the Friend’s Camp, just beyond the curve, where there are often groups of children crossing the road, on their way to the lake on the other side. You round the curve and there they are, a whole gang of them, dressed in swim suits and clutching beach towels. You hit the brakes and are shocked when they don’t respond. You now have seven-tenths of a second to decide: do you drive straight ahead and strike the crossing kids or avoid them and dump your car in the ditch?

Not a difficult decision, you might think. Most of us would prefer a filthy fender to a bloody bumper. But what if instead of a ditch, it was a tree, and the collision killed everyone in the car? Do you still swerve to avoid the kids in the crosswalk and embrace an evergreen instead? What if your own children were in the car with you? Would you make the same decision?

If this little thought exercise made you queasy, that’s okay. Imagine how the programmers building the artificial intelligence (A.I.) that dictates the behavior of driverless cars must feel.

There may be a million to one chance of this happening to you, but with 253 million cars on the road, it will happen to someone. And in the near future, that someone might be a driverless car. Will the car’s A.I. remember where kids often cross? How will it choose one life over another in a zero-sum game?

When we are thrust into these life-or-death situations, we often don’t have time to think and react mostly by instinct. A driverless car has no instinct, but can process millions of decisions a second. It faces the contradictory expectations of being both predictable and capable of reacting to the unexpected.

That is why driverless cars were not possible before recent advances in artificial intelligence and computing power. Rather than the linear, conditional programming techniques of the past (e.g., if this, then that), driverless cars employ a field of computer science called “machine learning,” which relies on more human-like faculties, such as pattern recognition, and can adjust its own parameters based on past results in order to attain better accuracy in the future. Basically, the developers give the A.I. a series of tests, and based on its success or failure in those tests, the A.I. updates its algorithms to improve its success rate.
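
A toy example gives the flavor of that test-and-adjust loop. The sketch below trains a one-layer “perceptron,” one of the oldest machine-learning techniques, on a handful of invented brake/don’t-brake examples; real driverless-car systems are vastly more sophisticated, and every number here is made up purely for illustration.

```python
# Toy training data: ([obstacle_distance_m, speed_mph], label).
# Label 1 means "brake," 0 means "don't brake." All numbers are invented.
data = [([5, 30], 1), ([50, 30], 0), ([10, 60], 1), ([80, 25], 0)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.01   # lr = learning rate

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# The "series of tests": show each example, and nudge the weights
# a little in the right direction after every mistake.
for _ in range(50):
    for features, target in data:
        error = target - predict(features)
        weights = [w + lr * error * x for w, x in zip(weights, features)]
        bias += lr * error

print(predict([8, 45]))   # prints 1: a close obstacle at speed means brake
```

After a few dozen passes over the examples, the weights settle into a rule that brakes for close obstacles, without anyone ever writing that rule by hand.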

That is what is happening right now in San Francisco, Boston, and soon New York. Las Vegas is testing a driverless bus system. These are opportunities for the driverless A.I. to encounter real-life situations and learn from those encounters before the technology is rolled out to the average consumer.

The only way we learn is from our mistakes. That is true of driverless cars, too, and they have made a few. There have been hardware and software failures and unforeseen errors. In February 2016, a Google driverless car experienced its first at-fault crash, turning into the path of a passing bus. In May 2016, a man in a Tesla running on Autopilot was killed when the car drove at full speed under a white tractor-trailer crossing in front of it. The white trailer against the smoky backdrop of a cloudy sky fooled the car’s sensors. The driver was reportedly watching Harry Potter at the time and never saw it coming.

Mistakes are ubiquitous in our lives; “human error” has become cliché. But will we be as forgiving of such mistakes when they are made by a machine? Life is an endless series of unfortunate coincidences, and no one can perfectly predict every situation. But, lest I sound like Dustin Hoffman in the film Rain Man, quoting plane crash statistics, let me say I am certain studies will eventually show autonomous vehicles reduce overall accident rates.

Also to be considered are the legal aspects. If a driverless car strikes a pedestrian, who is responsible? The owner of the driverless car? The car manufacturer? The developer of the artificial intelligence governing the car’s behavior? The people responsible for testing it?

We are in the century of A.I., and its first big win will be the self-driving car. The coming decade will be an interesting one to watch.

Get ready to have a new relationship with your automobile.

Eric can be emailed at ericwaustin@gmail.com.

TECH TALK: The Equifax Hack – What you need to know

ERIC’S TECH TALK

by Eric Austin
Computer Technical Advisor

Do you have a coin? Flip it. Tails, you are about to be the victim of identity theft. Heads, you’re safe — maybe. That’s the situation created by the recent Equifax data breach.

The hack exposed the personal information of 143 million Americans. That’s roughly half of everyone in America. Names and addresses, Social Security numbers, birth dates, and even driver’s license numbers were stolen, as well as 209,000 credit card numbers.

“This is about as bad as it gets,” Pamela Dixon, executive director of the World Privacy Forum, a nonprofit research group, told the New York Times. “If you have a credit report, chances are you may be in this breach. The chances are much better than 50 percent.”

As a precaution, the widespread advice from financial advisers is to request a freeze of your credit from each of the three big credit reporting agencies: TransUnion, Experian and Equifax. Each freeze request will cost you $10 – although, after some seriously negative press, Equifax has decided to waive its fee until November 21.

The details of the hack and Equifax’s handling of it are also concerning. According to the Times, Equifax detected the breach in July, but didn’t warn consumers until September. It’s estimated hackers had access to Equifax data from mid-May until July 29, before the hack was finally discovered.

The New York Post first revealed the cause of the breach: a vulnerability in the software package Apache Struts, an open-source, web development framework used by many big financial institutions. The developer of the software discovered the vulnerability back in March, and issued a fix for the problem, but Equifax neglected to update their systems.

After the public announcement in September, Equifax set up a website, Equifaxsecurity2017.com, where consumers can check to see if they are among those affected. According to the company, at the site you can “determine if your information was potentially impacted by this incident.”

You can also sign up for a free year of identity protection through their service, TrustedID. Initially, Equifax received some backlash when it was discovered that consumers signing up for the program were forced to agree to a “terms of service” that waived their rights to sue for damages. The language has since been altered, and Equifax recently released a statement insisting that using the service will not require individuals to give up any of their rights to participate in a class-action lawsuit.

Other troubling reports have come to light as well. The day after Equifax discovered the data breach – but over a month before it was disclosed to the public – three Equifax executives, including the company’s chief financial officer, unloaded nearly $2 million worth of corporate stock. The company’s stock value has fallen more than 35 percent in the days since, and Congress is calling for an investigation into possible insider trading.

Equifax’s recent activities in Washington have only added to the bad press. In the months leading up to the hack, Equifax was busy lobbying Washington to relax the regulations and safeguards on the credit reporting industry. According to The Philadelphia Inquirer, the company spent more than $500,000 seeking to influence lawmakers on issues such as “data security and breach notification” and “cybersecurity threat information sharing” in the first six months of 2017.

This includes an effort to repeal federal regulations upholding a consumer’s right to sue credit reporting companies. In July, as reported by the Consumerist, an arm of Consumer Reports, the House of Representatives passed a resolution under the Congressional Review Act in a slim, party-line vote. If upheld by the Senate and signed by the President, the resolution would overturn certain rules created by the Consumer Financial Protection Bureau to regulate the financial industry. This agency was set up as a safeguard for consumers after the financial crash of 2007-08. Among the rules in danger of repeal are measures meant to protect consumers by “curbing the use of ‘forced arbitration’ in many consumers’ financial contracts.”

And Equifax is likely to profit from this act of negligence, as it fuels existing paranoia about online privacy and will inspire millions to spend money on the pseudo-security of identity protection services, including Equifax’s own TrustedID.

The fallout from this hack is still being assessed, and likely won’t be fully known for years, if ever. This is the Deepwater Horizon of data breaches, and it should serve as a similar wake-up call for consumers.

We need a higher standard of accountability in the financial industry. These institutions no longer simply protect our money. Now they guard our very identities. Their servers should be as secure as their bank vaults. Money is replaceable, but most of us have only the one identity.

TECH TALK: Welcome to the world of Big Data

ERIC’S TECH TALK

by Eric Austin
Computer Technical Advisor

What exactly is Big Data? Forbes defines it as “the exponential explosion in the amount of data we have generated since the dawn of the digital age.”

Harvard researchers Erez Aiden and Jean-Baptiste Michel explore this phenomenon in their book, Uncharted: Big Data as a Lens on Human Culture. They note, “If we write a book, Google scans it; if we take a photo, Flickr stores it; if we make a movie, YouTube streams it.”

And Big Data is more than just user-created content from the digital era. It also includes previously published books that are now newly digitized and available for analysis.

Together with Google, Aiden and Michel have created the Google Ngram Viewer, a free online tool allowing anyone to search for n-grams, or linguistic phrases, in published works and plot their occurrence over time.

Since 2004 Google has been scanning the world’s books and storing their full text in a database. To date, they have scanned 15 million of the 129 million books published between 1500 and 2008. From this database, researchers created a table of two billion phrases, or n-grams, which can be analyzed by the year of the publication of the book in which they appear. Such analysis can provide insight into the evolution of language and culture over many generations.

As an example, the researchers investigated the phrase “the United States are” versus “the United States is.” When did we start referring to the United States as a singular entity, rather than a group of individual states? Most linguists think this change occurred after the Civil War in 1865, but from careful analysis with the Google Ngram Viewer, it is clear this didn’t take off until a generation later in the 1880s.
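
The underlying idea is easy to sketch. The snippet below, a toy stand-in for the real Ngram pipeline, counts a phrase’s occurrences by publication year across a tiny invented corpus:

```python
from collections import Counter

# A toy corpus of (publication_year, text) pairs -- a stand-in for the
# millions of scanned books behind the real Ngram Viewer.
corpus = [
    (1860, "the united states are divided on the question"),
    (1861, "the united states are at war with themselves"),
    (1885, "the united states is a single nation once more"),
    (1890, "the united states is growing rapidly"),
]

def ngram_counts(phrase):
    """Count how many times a phrase appears in each year of the corpus."""
    counts = Counter()
    for year, text in corpus:
        counts[year] += text.count(phrase)
    return counts

print(ngram_counts("united states are"))   # concentrated in the 1860s
print(ngram_counts("united states is"))    # concentrated in the 1880s and after
```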

Author Seth Stephens-Davidowitz thinks the internet has an even greater resource for understanding human behavior: Google searches. Whenever we do a search on Google, our query is stored in a database. That database of search queries is itself searchable using the online tool Google Trends. Stephens-Davidowitz found this data so interesting he wrote his dissertation on it, and now has written a book: Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.

Google Trends doesn’t just tell us what people are searching for on the internet, it also tells us where those people live, how old they are, and what their occupation is. Clever analysts can cross-index this data to tell us some interesting facts about ourselves. Stephens-Davidowitz argues this data is even more accurate than surveys because people lie to other people, but not to Google.

In his book, Everybody Lies, Stephens-Davidowitz reports that on the night of Obama’s election in 2008, one out of every hundred Google searches containing the word “Obama” also contained the word “nigger” or “KKK.” But who was making those searches? Are Republicans more racist than Democrats? Not according to the data. Stephens-Davidowitz says there were a similar number of such searches in Democratic-leaning areas of the country as in Republican ones. The real divide is not North/South or Democrat/Republican, he asserts, but East/West, with a sharp drop-off in states west of the Mississippi River.

Stephens-Davidowitz even suggests Google Trends can offer a more accurate way of predicting vote outcomes than exit polling. By looking at searches containing the names of both candidates in the 2016 election, he found that the order in which the names appear in a search may demonstrate voter preference. In key swing states, there were a greater number of searches for “Trump Clinton” versus “Clinton Trump,” indicating a general movement toward the Republican candidate. This contradicted much of the polling data at the time, but turned out to be a more accurate barometer of candidate preference.

The world of Big Data is huge and growing larger every day. Researchers and scientists are finding new and better ways of analyzing it to tell us more about the most devious creatures on this planet. Us.

But we must be careful of the seductive lure of Big Data, and we should remember the words immortalized by Mark Twain: “There are three kinds of lies: lies, damn lies, and statistics.”

TECH TALK: Internet “outing” – social conscience or vigilante justice?

ERIC’S TECH TALK

by Eric Austin
Computer Technical Advisor

A couple of weeks ago, a violent clash broke out between protesters and counter-protesters in Charlottesville, Virginia. The violence occurred at a rally organized by white nationalists, angry at the imminent removal of a memorial for Confederate General Robert E. Lee.

I was home and watching it unfold as it happened. It was chilling to see footage of hundreds of men marching six abreast, torches held high and chanting “Blood and soil!” and “Jews will not replace us!”

Later in the day, reports came in that one of the white nationalists had rammed his car into a crowd of counter-protesters, killing a young woman and injuring many more. The moment was captured on video and played ad nauseum in the news media.

An observant Twitter user noted the major difference between racists of the past and those marching in Charlottesville: they no longer bothered with the iconic white robes and conical hoods. Their faces were plain to see.

Instead of a few grainy pictures on the front page of the Evening Post, thousands of photos and live video got posted to the internet.

The following day a tweet popped up in my Twitter feed. It was an appeal for help in identifying individuals from the photos and video that had been circulating on the internet and cable news channels. Full of righteous indignation, I liked and retweeted it.

Most of us have online profiles available for public view, with our real names attached to a photo, and often to a place of employment or school, or even to the names of other people we know: on sites like Facebook, LinkedIn or Instagram, but also in less obvious places like school alumni pages and business websites that list employees. Even our Amazon profiles have information about us. We leave our digital fingerprints everywhere.

On Monday, reports continued to pour in. One of the white nationalists had been identified, and his employer began receiving angry calls. He was fired.

Another young man’s family, after he was outed on Twitter, publicly disowned him in a letter sent to their local paper – which was then broadcast worldwide on the web. His nephew gave interviews to the press. “Our relatives were calling us in a panic earlier today,” he said, “demanding we delete all Facebook photos that connect us to them.”

This is all for the best, I thought to myself. Racism is wrong. White nationalism is destructive. Surely, the best way of dealing with such views is to shine a light on them.

The practice of publishing identifying information to the internet, called “doxing,” has grown over recent years. It appears in forms both arguably beneficial (exposure of government or corporate corruption) and utterly malicious (revenge porn).

Within days, the New York Times was reporting on one poor man in Arkansas, who had been misidentified by over-zealous internet sleuths. His inbox quickly filled with messages of vulgarity and hate. Ironically, this was in reaction to similar sentiments displayed in Charlottesville just a few days earlier.

I have always found myself coming down on the side of Benjamin Franklin, who said, “It is better 100 guilty persons should escape [justice] than that one innocent person should suffer.”

It’s a maxim Franklin applied to our criminal justice system, but I think it’s relevant here.

If you attend a neo-Nazi rally and decide not to bring your pointy hood, you risk family and friends seeing your face plastered all over the news.

But let’s not allow the internet’s version of mob mentality to dictate the rules for our society.

There is a reason John Adams insisted “we are a nation of laws, not of men.” There is a reason our Founding Fathers chose to make this nation a constitutional republic instead of one ruled only by the majority.

The internet is a powerful tool, but one better used to facilitate dialogue with others than as a weapon to bludgeon them. The internet may be a billion voices, but it can also wear a billion boots. Let’s not trample the innocent in our mad rush to condemn the genuinely horrific.

If you’d like to be my third follower on Twitter, you can find me @realEricAustin or email me at ericwaustin@gmail.com.

TECH TALK: How technology could save our Republic

Gerry-mandering explained. (image credit: Washington Post)

ERIC’S TECH TALK

by Eric W. Austin
Computer Technical Advisor

Elbridge Gerry, second-term governor of Massachusetts, is about to do something that will make his name both legendary and despised in American partisan politics. It’s two months after Christmas, in a cold winter in the year 1812.

Typical of a politician, the next election is foremost in his mind. And Gerry has reason to worry.

Elections in those days were a yearly affair. Between 1800 and 1803, Gerry had lost four elections in a row to Federalist Caleb Strong. He didn’t dare run again until after Strong’s retirement in 1807. Three years later, though, Elbridge Gerry gathered his courage and tried again.

This time he won.

Gerry was a Democratic-Republican, but during his first term the Federalists had control of the Massachusetts legislature, and he gained a reputation for championing moderation and rational discourse.

However, in the next election cycle his party gained control of the Senate and things changed. Gerry became much more partisan, purging many of the Federalist appointees from the previous cycle and enacting so-called “reforms,” increasing the number of judicial appointments, which he then filled with Republican flunkies.

The irony of this is that Gerry had been a prominent figure at the U.S. Constitutional Convention in the summer of 1787, where he was a vocal advocate of small government and individual liberties. He even angrily quit the Massachusetts ratifying convention in 1788 after getting into a shouting match with the convention’s chair, Francis Dana, primarily over the lack of a bill of rights. (The first 10 amendments to the Constitution later became the Bill of Rights.)

But none of this is what Elbridge Gerry is remembered for.

That came in the winter of 1812 when he signed into law a bill redrawing voting districts in such a way that it gave electoral advantage to his own Democratic-Republican party.

Political cartoon from the early 1800s.

The move was highly successful from a political standpoint, but unpopular. In the next election, Gerry’s Democratic-Republican party won all but 11 seats in a State Senate that had been controlled by the Federalists only the year before. This, despite losing a majority of the seats in the House by a wide margin, and the governorship as well: Gerry’s old Federalist nemesis, Caleb Strong, came out of retirement to defeat him.

According to a legendary account from the period, someone posted a map of the newly-drawn districts on the wall in the offices of the Boston Gazette. One of the editors pointed to the district of Essex and remarked that its odd shape resembled a salamander. Another editor exclaimed, “A salamander? Call it a Gerry-mander!”

Thus the first “Gerry-mander” was born.

Today the process of redrawing district boundaries in such a way as to favor one party over another is referred to as “gerrymandering.”

The Constitution mandates that congressional seats be reapportioned among the states after each census, and the courts require that districts within a state be of roughly equal population. So, every ten years when a census is taken, states redraw their voting districts based on population changes. In many states, the party that controls the state legislature at the time also controls this process. Predictably, gerrymandering is most prevalent in these states.

By strategically drawing the district lines to give the ruling party election-advantage, that party can maintain their legislative power even if the majority of the population moves away from them in the following years.

According to a 2014 study conducted by The Washington Post, Republicans are currently responsible for drawing eight out of ten of the most gerrymandered districts in the U.S. This has resulted in the Democrats being under-represented by about 18 seats in the U.S. House of Representatives “relative to their vote share in the 2012 election.”

The most gerrymandered districts in the United States. (image credit: Washington Post)

Maine is one of the few states that has given this decision to an independent, bipartisan commission instead. That commission then sends a proposal for approval to the state legislature. Of course, we have it a bit easier, with only two districts to worry about.

For much of the nation, gerrymandering is still one of the most prevalent and democratically destructive practices in politics today.

It’s also notoriously difficult to eradicate.

The problem is that someone has to decide on the districts. And everyone is biased.

Even in the few cases where legal action has been brought against an instance of partisan gerrymandering, how does one prove that bias in a court of law? The quandary is this: in order to prove a district was drawn with biased intent, one must first provide an example of how the district would look if drawn without bias. But since all districts are drawn by people, there is no such example to use.

Because of this difficulty, in 2004 the Supreme Court ruled that such a determination constitutes an “unanswerable question.”

But that may be about to change.

There is currently a major redistricting case before the Supreme Court. Professor Steve Vladeck, of the University of Texas School of Law, calls it “the biggest and most important election law case in decades.” It involves a gerrymandered district map in Wisconsin.

The reason the courts are now taking these cases more seriously is because of recent advances in computer-powered analytics: technology may finally provide that elusive example of an unbiased district.

This week, August 7-11, a team of mathematicians at Tufts University is holding a conference on the “Geometry of Redistricting” to look at this very problem.

A number of mathematical algorithms have already been proposed to remove the human factor from the process of redistricting.

Brian Olson, a software engineer from Massachusetts, has developed an algorithm which draws districts based on census blocks. His approach aims to make districts as compact as possible while maintaining neighborhood integrity.
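
To give a flavor of how such an algorithm might work, here is a toy sketch, not Olson’s actual code: it repeatedly assigns invented census blocks to the nearest of three district centers, then re-centers each district on its population. A real redistricting algorithm would also have to enforce equal population and contiguity.

```python
import math
import random

# Invented census blocks: (x, y, population). Real blocks would come
# from Census Bureau data.
random.seed(1)
blocks = [(random.uniform(0, 100), random.uniform(0, 100), random.randint(50, 500))
          for _ in range(200)]

K = 3                                              # number of districts to draw
centers = [(b[0], b[1]) for b in random.sample(blocks, K)]

for _ in range(20):                                # a few refinement passes
    # 1. Assign every block to its nearest district center (compactness).
    assignment = [min(range(K), key=lambda k: math.dist((x, y), centers[k]))
                  for x, y, _ in blocks]
    # 2. Move each center to the population-weighted centroid of its blocks.
    for k in range(K):
        members = [b for b, a in zip(blocks, assignment) if a == k]
        pop = sum(p for _, _, p in members) or 1
        centers[k] = (sum(x * p for x, _, p in members) / pop,
                      sum(y * p for _, y, p in members) / pop)

for k in range(K):
    pop = sum(p for (_, _, p), a in zip(blocks, assignment) if a == k)
    print(f"District {k}: population {pop:,}")
```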

The debate is still going on about which factors are most essential to a redistricting algorithm, but eventually one method will become standard and the days of gerrymandering will be over.

Poor Elbridge Gerry. After losing the Massachusetts governorship, he became vice president under James Madison and then died in office, becoming the only signer of the Declaration of Independence to be buried in America’s capital. But he’s mostly remembered for the despised political practice that bears his name. Hopefully, soon even that will be forgotten.

Good riddance Elbridge Gerry, I say. Good riddance, sir!

The difference a computer makes: The top image shows the districts of North Carolina as they are drawn today. The bottom image shows districts drawn by an unbiased computer algorithm. Which looks more fair to you? (image credit: Washington Post)