TECH TALK: Bug hunting in the late 20th century

(image credit: XDanielx – public domain via Wikimedia Commons)


by Eric W. Austin
Computer Technical Advisor

The year is 1998. As the century teeters on the edge of a new millennium, no one can stop talking about Monica Lewinsky’s dress. September 11, 2001, is still a long way off, and the buzz in the tech bubble is all about the Y2K bug.

I was living in California at the time, and one of the first projects of my burgeoning technical career was this turn-of-the-century problem. The Y2K bug hit the financial sector, which depends upon highly accurate transactional data, especially hard, forcing many companies to put together whole departments whose only responsibility was to deal with it.

I joined a team of about 80 people as a data analyst, working directly with the team leader to aggregate data on the progress of the project for the vice president of the department.

Time Magazine cover from January 1999

Born out of a combination of the memory constraints of early computers in the 1960s and a lack of foresight, the Y2K bug was sending companies into a panic by 1998.

In the last decade, we’ve become spoiled by the easy availability of data storage. Today, we have flash drives that store gigabytes of data and can fit in our pockets, but in the early days of computing, data storage was expensive, requiring huge server rooms with 24-hour temperature control. Programmers developed a number of tricks to compensate. Shaving even a couple of bytes off a data record could mean the difference between a productive program and a crashing catastrophe. One of the ways they did this was by storing dates using only six digits – 11/09/17. Dropping the first two digits of the year from hundreds of millions of records meant significant savings in expensive data storage.
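For the technically curious, the trick – and its danger – is easy to sketch. Here’s a toy illustration in Python (not the COBOL of the era), including the “windowing” fix many remediation teams applied; the field names and the pivot year of 50 are my own invention:

```python
def expand_two_digit_year(yy: int, pivot: int = 50) -> int:
    """'Windowing' fix: two-digit years below the pivot are read as
    20xx, the rest as 19xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

# A program that silently assumes every year begins with "19"
# produces nonsense once the century turns over:
loan_opened = 98    # stored in two digits, meaning 1998
loan_checked = 0    # meant to be the year 2000
naive_age = loan_checked - loan_opened    # -98 years!

# Windowing repairs the arithmetic without re-keying the records:
fixed_age = expand_two_digit_year(loan_checked) - expand_two_digit_year(loan_opened)
# fixed_age is 2, as intended
```

The hard part, of course, was not the fix itself but finding every place such an assumption was hiding.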

This convention was widespread throughout the industry. It was hard-coded into programs, assumed in calculations, and stored in databases. Everything had to be changed. The goal of our team was to identify every instance where a two-digit year was used, in any application, query or table, and change it to use a four-digit year instead. This was more complicated than it sounds, as many programs and tables had interdependencies with other programs and tables, and all these relationships had to be identified first, before changes could be made. Countrywide Financial, the company that hired me, was founded in 1969 and had about 7,000 employees in 1998. We had 30 years of legacy code that had to be examined line by line, tested and then put back into production without breaking any other functionality. It was an excruciating process.

It was such a colossal project there weren’t enough skilled American workers to complete the task in time, so companies reached outside the U.S. for talent. About 90 percent of our team was from India, sponsored on a special H-1B visa program expanded by President Bill Clinton in October of ’98, specifically to aid companies in finding enough skilled labor to combat the Y2K bug.

For a kid raised in rural New England, this was quite the culture shock, but I found it fascinating. My Indian colleagues spoke excellent English, although for most of them Hindi was their first language, and they were happy to answer my many questions about Indian culture.

I immediately became good friends with my cube-mate, an affable young Indian man and one of the team leaders. On my first day, he told me excitedly about being recently married to a woman selected by his parents while he had been working here in America. He laughed at my shock after explaining he had spoken with his bride only once – by telephone – before the wedding.

About a month into my contract, my new friend invited me to share dinner with him and his family. I was excited for my first experience of true Indian home-cooking.

By and large, Californians aren’t the most sociable neighbors. Maybe it’s all that time stuck in traffic, but it’s not uncommon to live in an apartment for years and never learn the name of the person across the hall. Not so in Srini’s complex!

Srini lived with a number of other Indian men and their families, also employed by Countrywide, in a small apartment complex in Simi Valley, about 20 minutes down the Ronald Reagan Freeway from where I lived in Chatsworth, on the northwest side of Los Angeles County.

I arrived in my best pressed shirt, and found that dinner was a multi-family affair. At least a dozen other people, from other Indian families living in nearby apartments – men, women, and children – gathered in my friend’s tiny living room.

The men lounged on the couches and chairs, crowded around the small television, while the women toiled in the kitchen, gossiping in Hindi and filling the tiny apartment with the smells of curry and freshly baking bread.

At dinner, I was surprised to find that only men were allowed to sit around the table. Although they had just spent the past two hours preparing the meal, the women sat demurely in chairs placed against the walls of the kitchen. When I offered to make room for them, Srini politely told me they would eat later.

I looked in vain for a fork or a spoon, but there were no utensils. Instead, everyone ate with their fingers, scooping up the food with a thick flatbread called chapati. Everything was delicious.

Full of curry, flatbread, and perhaps a bit too much Indian beer, I was walked back to my car after dinner by Srini and his wife. Unfortunately, when Srini’s wife gave me a slight bow of farewell, I was a tad too eager to demonstrate my cultural savoir-faire and mistook her bow for an invitation to la bise, the French cheek kiss. Bumped foreheads and much furious blushing resulted. Later, I had to apologize to Srini for attempting to kiss his wife. He thought it was hilarious.

Countrywide survived the Y2K bug, although the company helped bring down the economy a decade later. Srini moved on to other projects within the company, as did I. The apocalypticists would have to wait until 2012 to predict the end of the world again, but the problems – and opportunities – created by technology have only grown in the last 17 years. Driverless cars, Big Data, renegade A.I. – dealing with these problems, and exploiting the opportunities they open up for us, will take a concerted effort from the brightest minds on the planet.

Thankfully, they’re already working on it.

Here at Tech Talk we take a look at the most interesting – and beguiling – issues in technology today. Eric can be reached at, and don’t forget to check out previous issues of the paper online at

TECH TALK: Virtual Money – The next evolution in commerce


by Eric Austin
Technical Consultant

Commerce began simply enough. When roving bands of migratory hunters met in the prehistoric wilderness, it was only natural that they compare resources and exchange goods. The first trades were simple barters: two beaver skins and a mammoth tusk for a dozen arrowheads and a couple of wolf pelts.

As people settled down and built cities, commerce needed to be standardized. In ancient Babylon, one of our earliest civilizations, barley served as the standard of measurement. The basic monetary unit, the ‘shekel,’ was equal to 180 grains of barley.

The first coins appeared not long after. Initially, a coin was worth the value of the metal it was minted from, but eventually its intrinsic value separated from its representational value. When the state watered down the alloy of a gold coin with baser metals, such as tin or copper, it invented inflation. With the introduction of paper money, first in China in the 7th century CE and later in medieval Europe, the idea of intrinsic worth was abandoned entirely in favor of a representational value dictated by the state.

In the 19th and 20th centuries, corporations took over from the state as the main drivers in the evolution of commerce. Then, in the 1960s, the foundations of e-commerce were laid with the development of electronic data interchange (EDI), a set of standards for transactions between two electronic devices on a network. EDI grew out of Cold War military logistics – specifically, the need to coordinate transported goods during the 1948 Berlin Airlift.

Worry about the security of such communication kept it from being used for financial transactions until 1994, when Netscape, an early browser company whose technology later became the foundation of browsers such as Firefox, invented Secure Sockets Layer (SSL) encryption, a cryptographic protocol that secures communications between computers over a network. After this breakthrough, various third parties began providing credit card processing services. A short time later, Verisign developed the first unique digital identifier, or SSL certificate, to verify merchants. With that, our current system for online commerce was complete.

So why is Internet security still such a constant worry? Part of the problem is embedded in the structure of the Internet itself. The Internet was designed first and foremost to facilitate communication, and its openness and decentralized structure are antithetical to the financial sector, which depends on the surety of a centralized authority overseeing all transactions. Most of our existing security issues on the internet are a consequence of these diametrically opposed philosophies.

Cryptocurrencies are the result of thinking about money with an Internet mindset. Classified as a virtual currency, cryptocurrencies such as Bitcoin aim to solve a number of problems present in our current online transactional system by embracing the decentralized structure of the Internet and by lifting some novel concepts from cryptography, the study of encryption and code-breaking.

Introduced in 2009, Bitcoin was the world’s first decentralized cryptocurrency. Bitcoin tackles the security issues of our current system by decentralizing its transaction data. Bitcoin’s public ledger is called a ‘blockchain,’ with each block in the chain containing a batch of financial transactions. The database is designed to prevent tampering by building into each block a cryptographic reference to the block before it. To alter one record undetected, a hacker would need to alter every subsequent block that builds on it.

And since the database is maintained by every computer participating in that chain of transactions, any data altered on one computer would be immediately detected by every other computer on the network. This ‘decentralized data’ concept eliminates the big weakness in our current system. Today, the control of data is concentrated in a few centralized institutions, and if the security of any one of those institutions is penetrated, the entire system becomes compromised.
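For the technically curious, the hash-linking idea can be sketched in a few lines of Python. This is a toy model with invented names – real Bitcoin blocks also carry timestamps, Merkle trees and proof-of-work – but it shows why tampering with one block breaks every link after it:

```python
import hashlib
import json

def block_hash(contents: dict) -> str:
    """Hash a block's contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block that is cryptographically linked to its predecessor."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    contents = {"prev_hash": prev, "transactions": transactions}
    chain.append({**contents, "hash": block_hash(contents)})

def chain_is_valid(chain: list) -> bool:
    """Recompute every hash; any tampered block breaks the links after it."""
    prev = "0" * 64
    for b in chain:
        contents = {"prev_hash": b["prev_hash"], "transactions": b["transactions"]}
        if b["prev_hash"] != prev or b["hash"] != block_hash(contents):
            return False
        prev = b["hash"]
    return True

ledger = []
add_block(ledger, ["alice pays bob 5"])
add_block(ledger, ["bob pays carol 2"])
assert chain_is_valid(ledger)

ledger[0]["transactions"] = ["alice pays mallory 500"]  # tamper with history
assert not chain_is_valid(ledger)                       # detected immediately
```

In the real network, every participating computer runs a check like `chain_is_valid` independently, which is what makes quiet alteration so difficult.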

Beyond creating a secure financial transaction system for the World Wide Web, another goal of cryptocurrencies is to reduce or even eliminate financial fees by removing the need for a middleman overseeing the transaction. Since no centralized banking authority is necessary to track transactions, many of the costs associated with the involvement of banking institutions disappear. This has made Bitcoin a preferred currency for moving money around the world, as it can be done with a minimum of bureaucratic fees. Western Union, by contrast, currently charges a fee of 7 to 8 percent on a typical $100 transfer. For migrant workers sending money home to their families, that’s a big hit.

Because no personal identifying information is recorded as part of a Bitcoin transaction, Bitcoin provides a level of anonymity not possible with our current system. However, as pointed out by MIT researchers, this anonymity only extends as far as the merchant accepting the transaction, who may still tag transaction IDs with personal customer info.

The anonymous nature of Bitcoin transactions is a boon to the security of consumers, but it presents a real problem for law enforcement. Bitcoin has become the favored currency for criminal activity. Kidnappers frequently insist on payment in Bitcoin. The WannaCry ransomware that attacked 200,000 computers in 150 countries earlier this year required victims to pay in Bitcoin.

The value of Bitcoin has risen steadily since it was introduced eight years ago. In January 2014, one bitcoin was worth $869.61. As I write this in October 2017, that same bitcoin is valued at $5,521.32, an increase of more than 500 percent in under four years. With approximately 16 million bitcoins in circulation, the total current value of the Bitcoin market is almost $92 billion. The smallest unit of Bitcoin is called a ‘satoshi,’ worth one hundred-millionth of a bitcoin.

WannaCry isn’t the only cyberthreat to leverage Bitcoin, either. Since Bitcoin rewards the computers that keep its database updated by paying them in new bitcoins, some malicious programmers have created viruses that hijack your computer in order to force it to mine bitcoins. Most people are not even aware this has happened. There may simply be a process running in the background, slowing down your PC and quietly depositing earned bitcoins into a hacker’s digital wallet.
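Here, roughly, is the work a hijacked computer is being forced to do, sketched in Python: brute-force a “nonce” until the block’s hash meets a difficulty target. Real Bitcoin mining double-hashes an 80-byte block header at an astronomically higher difficulty; this toy uses a few leading zeros just to show the shape of the job:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Try nonces until the hash starts with `difficulty` hex zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine("example block")
assert hashlib.sha256(f"example block:{nonce}".encode()).hexdigest().startswith("0000")
```

Each added zero multiplies the expected work sixteenfold, which is why a mining virus can peg your CPU for hours on end.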

The benefits to be gained by this revolution in commerce – security, anonymity, and the elimination of the need for a financial middleman – are great, but the risks are not to be dismissed either. Even as the anonymous nature of cryptocurrencies provides the consumer with greater security and lower costs, it creates a haven for criminals and makes it more difficult for law enforcement to track cybercrime.

Whether Bitcoin sticks around or disappears to be replaced with something else, the philosophy and technology behind it will transform the financial sector in the decades to come. Our current internet commerce model is a slapdash attempt to stick an old system onto the new digital world of the Internet and cannot last. The road to a new financial reality is bound to be a rocky one, as banking institutions are not likely to accept the changes – and the recession of their influence – easily. But, as shown by the recent Equifax hack, which exposed the personal information of 143 million Americans, maybe trusting our financial security to a few, centralized institutions isn’t such a great idea. And maybe cryptocurrencies are part of the answer.

TECH TALK: A.I. on the Road: Who’s Driving?


by Eric Austin
Computer Technical Advisor

In an automobile accident – in the moments before the car impacts an obstacle, in the seconds before glass shatters and steel crumples – we usually don’t have time to think, and we are often haunted by self-recriminations in the days and weeks afterward. Why didn’t I turn? Why didn’t I hit the brakes sooner? Why’d I bother even getting out of bed this morning?

Driverless cars aim to solve this problem by replacing the human brain with a silicon chip. Computers think faster than we do and they are never flustered – unless that spinning beach ball is a digital sign of embarrassment? – but the move to put control of an automobile in the hands of a computer brings with it a new set of moral dilemmas.

Unlike your personal computer, a driverless car is a thinking machine. It must be capable of making moment-to-moment decisions that could have real life-or-death consequences.

Consider a simple moral quandary. Here’s the setup: It’s summer and you are driving down Lakeview Drive, headed toward the south end of China Lake. You pass China Elementary School. School is out of session so you don’t slow down, but you’ve forgotten about the Friend’s Camp, just beyond the curve, where there are often groups of children crossing the road, on their way to the lake on the other side. You round the curve and there they are, a whole gang of them, dressed in swim suits and clutching beach towels. You hit the brakes and are shocked when they don’t respond. You now have seven-tenths of a second to decide: do you drive straight ahead and strike the crossing kids or avoid them and dump your car in the ditch?

Not a difficult decision, you might think. Most of us would prefer a filthy fender to a bloody bumper. But what if instead of a ditch, it was a tree, and the collision killed everyone in the car? Do you still swerve to avoid the kids in the crosswalk and embrace an evergreen instead? What if your own children were in the car with you? Would you make the same decision?

If this little thought exercise made you queasy, that’s okay. Imagine how the programmers building the artificial intelligence (A.I.) that dictates the behavior of driverless cars must feel.

There may be a million to one chance of this happening to you, but with 253 million cars on the road, it will happen to someone. And in the near future, that someone might be a driverless car. Will the car’s A.I. remember where kids often cross? How will it choose one life over another in a zero-sum game?

When we are thrust into these life-or-death situations, we often don’t have time to think and react mostly by instinct. A driverless car has no instinct, but can process millions of decisions a second. It faces the contradictory expectations of being both predictable and capable of reacting to the unexpected.

That is why driverless cars were not possible before recent advances in artificial intelligence and computing power. Rather than the linear, conditional programming techniques of the past (e.g., If This Then That), driverless cars employ a newer field of computer science called “machine learning,” which relies on more human-like faculties, such as pattern recognition, and can adjust its own parameters based on past results in order to attain better accuracy in the future. Basically, the developers give the A.I. a series of tests, and based on its success or failure in those tests, the A.I. updates its algorithms to improve its success rate.
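The difference can be sketched in a few lines of Python. The one-line rule on top is the old, hand-coded approach; the loop below “learns” a braking threshold by nudging it after every mistake on labeled examples. The scenario and numbers are invented purely for illustration:

```python
# The old way: a hand-coded rule with a threshold a programmer guessed.
def brake_rule(distance_m: float) -> bool:
    return distance_m < 20.0

# The machine-learning way: start with a guess, then nudge the
# threshold every time it misclassifies a labeled example.
def learn_threshold(examples, epochs=100, lr=0.5):
    threshold = 0.0
    for _ in range(epochs):
        for distance, should_brake in examples:
            predicted = distance < threshold
            if predicted != should_brake:
                threshold += lr if should_brake else -lr
    return threshold

# (distance in meters, should the car brake?)
examples = [(5, True), (12, True), (18, True), (25, False), (40, False)]
threshold = learn_threshold(examples)
assert all((d < threshold) == label for d, label in examples)
```

Real driverless systems use neural networks with millions of parameters rather than a single threshold, but the principle – improve by correcting your own mistakes – is the same.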

That is what is happening right now in San Francisco, Boston, and soon New York. Las Vegas is testing a driverless bus system. These are opportunities for the driverless A.I. to encounter real-life situations and learn from those encounters before the technology is rolled out to the average consumer.

The only way we learn is from our mistakes. That is true of driverless cars, too, and they have made a few. There have been hardware and software failures and unforeseen errors. In February 2016, a Google driverless car experienced its first at-fault crash, turning into the path of a passing bus. In May 2016, a man in a self-driving Tesla was killed when the car tried to drive at full speed under a white tractor-trailer crossing in front of it. The white trailer against the smoky backdrop of a cloudy sky fooled the car. The occupant was reportedly watching Harry Potter and never saw it coming.

Mistakes are ubiquitous in our lives; “human error” has become cliché. But will we be as forgiving of such mistakes when they are made by a machine? Life is an endless series of unfortunate coincidences, and no one can perfectly predict every situation. But, lest I sound like Dustin Hoffman in the film Rain Man, quoting plane crash statistics, let me say I am certain studies will eventually show autonomous vehicles reduce overall accident rates.

Also to be considered are the legal aspects. If a driverless car strikes a pedestrian, who is responsible? The owner of the driverless car? The car manufacturer? The developer of the artificial intelligence governing the car’s behavior? The people responsible for testing it?

We are in the century of A.I., and its first big win will be the self-driving car. The coming decade will be an interesting one to watch.

Get ready to have a new relationship with your automobile.

Eric can be emailed at

TECH TALK: The Equifax Hack – What you need to know


by Eric Austin
Computer Technical Advisor

Do you have a coin? Flip it. Tails, you are about to be the victim of identity theft. Heads, you’re safe — maybe. That’s the situation created by the recent Equifax data breach.

The hack exposed the personal information of 143 million Americans. That’s nearly half of everyone in America. Names and addresses, Social Security numbers, birth dates, and even driver’s license numbers were stolen, as well as 209,000 credit card numbers.

“This is about as bad as it gets,” Pamela Dixon, executive director of the World Privacy Forum, a nonprofit research group, told the New York Times. “If you have a credit report, chances are you may be in this breach. The chances are much better than 50 percent.”

As a precaution, the widespread advice from financial advisers is to request a freeze of your credit from each of the three big credit reporting agencies: TransUnion, Experian and Equifax. Each freeze request will cost you $10 – although, after some seriously negative press, Equifax has decided to waive its fee until November 21.

The details of the hack and Equifax’s handling of it are also concerning. According to the Times, Equifax detected the breach in July, but didn’t warn consumers until September. It’s estimated hackers had access to Equifax data from mid-May until July 29, when the breach was finally discovered.

The New York Post first revealed the cause of the breach: a vulnerability in the software package Apache Struts, an open-source, web development framework used by many big financial institutions. The developer of the software discovered the vulnerability back in March, and issued a fix for the problem, but Equifax neglected to update their systems.

After the public announcement in September, Equifax set up a website where consumers can check to see if they are among those affected. According to the company, at the site you can “determine if your information was potentially impacted by this incident.”

You can also sign up for a free year of identity protection through their service, TrustedID. Initially, Equifax received some backlash when it was discovered that consumers signing up for the program were forced to agree to a “terms of service” that waived their rights to sue for damages. The language has since been altered, and Equifax recently released a statement insisting that using the service will not require individuals to give up any of their rights to participate in a class-action lawsuit.

Other troubling reports have come to light as well. The day after Equifax discovered the data breach – but over a month before it was disclosed to the public – three Equifax executives, including the company’s chief financial officer, unloaded nearly $2 million in corporate stock. The company’s stock value has fallen more than 35 percent in the days since, and Congress is calling for an investigation into possible insider trading.

Equifax’s recent activities in Washington have only added to the bad press. In the months leading up to the hack, Equifax was busy lobbying Washington to relax the regulations and safeguards on the credit reporting industry. According to The Philadelphia Inquirer, the company spent more than $500,000 seeking to influence lawmakers on issues such as “data security and breach notification” and “cybersecurity threat information sharing” in the first six months of 2017.

This includes an effort to repeal federal regulations upholding a consumer’s right to sue credit reporting companies. In July, as reported by the Consumerist, an arm of Consumer Reports, the House passed a resolution under the Congressional Review Act in a slim, party-line vote. If upheld by the Senate and signed by the President, the resolution would overturn certain rules created by the Consumer Financial Protection Bureau to regulate the financial industry. That agency was set up as a safeguard for consumers after the financial crash of 2007-08. Among the rules in danger of repeal are measures meant to protect consumers by “curbing the use of ‘forced arbitration’ in many consumers’ financial contracts.”

And Equifax is likely to profit from this act of negligence, as it fuels existing paranoia about online privacy and will inspire millions to spend money on the pseudo-security of identity protection services, including Equifax’s own TrustedID.

The fallout from this hack is still being assessed, and likely won’t be fully known for years, if ever. This is the Deepwater Horizon of data breaches, and it should serve as a similar wake-up call for consumers.

We need a higher standard of accountability in the financial industry. These institutions no longer simply protect our money. Now they guard our very identities. Their servers should be as secure as their bank vaults. Money is replaceable, but most of us have only the one identity.

TECH TALK: Welcome to the world of Big Data


by Eric Austin
Computer Technical Advisor


What exactly is Big Data? Forbes defines it as “the exponential explosion in the amount of data we have generated since the dawn of the digital age.”

Harvard researchers Erez Aiden and Jean-Baptiste Michel explore this phenomenon in their book, Uncharted: Big Data as a Lens on Human Culture. They note, “If we write a book, Google scans it; if we take a photo, Flickr stores it; if we make a movie, YouTube streams it.”

And Big Data is more than just user-created content from the digital era. It also includes previously published books that are now newly digitized and available for analysis.

Together with Google, Aiden and Michel have created the Google Ngram Viewer, a free online tool allowing anyone to search for n-grams, or linguistic phrases, in published works and plot their occurrence over time.

Since 2004 Google has been scanning the world’s books and storing their full text in a database. To date, they have scanned 15 million of the 129 million books published between 1500 and 2008. From this database, researchers created a table of two billion phrases, or n-grams, which can be analyzed by the year of the publication of the book in which they appear. Such analysis can provide insight into the evolution of language and culture over many generations.

As an example, the researchers investigated the phrase “the United States are” versus “the United States is.” When did we start referring to the United States as a singular entity, rather than a group of individual states? Most linguists think this change occurred after the Civil War in 1865, but from careful analysis with the Google Ngram Viewer, it is clear this didn’t take off until a generation later in the 1880s.
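The counting behind such a query is simple enough to sketch in Python. The real Ngram Viewer does this across millions of scanned books and buckets the counts by publication year; the two one-line “books” below are invented stand-ins:

```python
from collections import Counter

def ngrams(text: str, n: int) -> Counter:
    """Count every run of n consecutive words in a text."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

# Invented stand-ins for books scanned from two different eras:
book_1860 = "the united states are a union of sovereign states"
book_1900 = "the united states is a single nation"

assert ngrams(book_1860, 4)[("the", "united", "states", "are")] == 1
assert ngrams(book_1900, 4)[("the", "united", "states", "is")] == 1
```

Tally those counts per publication year across the whole scanned corpus, and the “are” versus “is” crossover in the 1880s falls right out of the data.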

Author Seth Stephens-Davidowitz thinks the internet has an even greater resource for understanding human behavior: Google searches. Whenever we do a search on Google, our query is stored in a database. That database of search queries is itself searchable using the online tool Google Trends. Stephens-Davidowitz found this data so interesting he wrote his dissertation on it, and now has written a book: Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.

Google Trends doesn’t just tell us what people are searching for on the internet; it also tells us where those people live, how old they are, and what their occupation is. Clever analysts can cross-index this data to tell us some interesting facts about ourselves. Stephens-Davidowitz argues this data is even more accurate than surveys because people lie to other people, but not to Google.

In his book, Stephens-Davidowitz reports that on the night of Obama’s election in 2008, one out of a hundred Google searches containing the word “Obama” also contained the word “nigger” or “KKK.” But who was making those searches? Are Republicans more racist than Democrats? Not according to the data. Stephens-Davidowitz says there was a similar number of these types of searches in predominantly Democratic areas of the country as in Republican ones. The real divide is not North/South or Democrat/Republican, he asserts, but East/West, with a sharp drop-off in states west of the Mississippi River.

Stephens-Davidowitz even suggests Google Trends can offer a more accurate way of predicting vote outcomes than exit polling. By looking at searches containing the names of both candidates in the 2016 election, he found that the order in which the names appear in a search may demonstrate voter preference. In key swing states, there were a greater number of searches for “Trump Clinton” versus “Clinton Trump,” indicating a general movement toward the Republican candidate. This contradicted much of the polling data at the time, but turned out to be a more accurate barometer of candidate preference.

The world of Big Data is huge and growing larger every day. Researchers and scientists are finding new and better ways of analyzing it to tell us more about the most devious creatures on this planet. Us.

But we must be careful of the seductive lure of Big Data, and we should remember the words popularized by Mark Twain: “There are three kinds of lies: lies, damned lies, and statistics.”

TECH TALK: Internet “outing” – social conscience or vigilante justice?


by Eric Austin
Computer Technical Advisor

A couple of weeks ago, a violent clash broke out between protesters and counter-protesters in Charlottesville, Virginia. The violence occurred at a rally organized by white nationalists, angry at the imminent removal of a memorial for Confederate General Robert E. Lee.

I was home and watching it unfold as it happened. It was chilling to see footage of hundreds of men marching six abreast, torches held high and chanting “Blood and soil!” and “Jews will not replace us!”

Later in the day, reports came in that one of the white nationalists had rammed his car into a crowd of counter-protesters, killing a young woman and injuring many more. The moment was captured on video and played ad nauseam in the news media.

An observant Twitter user noted the major difference between racists of the past and those marching in Charlottesville: they no longer bothered with the iconic white robes and conical hoods. Their faces were plain to see.

Instead of a few grainy pictures on the front page of the Evening Post, thousands of photos and live video got posted to the internet.

The following day a tweet popped up in my Twitter feed. It was an appeal for help in identifying individuals from the photos and video that had been circulating the internet and cable news channels. Full of righteous indignation, I liked and retweeted it.

Most of us have online profiles available for public view, with our real names attached to a photo, and often to a place of employment or school, or even to the names of other people we know – on sites like Facebook, LinkedIn or Instagram, but also in less obvious places like school alumni pages and business websites that list employees. Even our Amazon profiles contain information about us. We leave our digital fingerprints everywhere.

On Monday, reports continued to pour in. One of the white nationalists had been identified, and his employer began receiving angry calls. He was fired.

Another young man’s family, after he was outed on Twitter, publicly disowned him in a letter sent to their local paper – which was then broadcast worldwide on the web. His nephew gave interviews to the press. “Our relatives were calling us in a panic earlier today,” he said, “demanding we delete all Facebook photos that connect us to them.”

This is all for the best, I thought to myself. Racism is wrong. White nationalism is destructive. Surely, the best way of dealing with such views is to shine a light on them.

The practice of publishing identifying information to the internet, called “doxing,” has grown over recent years. It appears in forms both arguably beneficial (exposure of government or corporate corruption) and utterly malicious (revenge porn).

Within days, the New York Times was reporting on one poor man in Arkansas, who had been misidentified by over-zealous internet sleuths. His inbox quickly filled with messages of vulgarity and hate. Ironically, this was in reaction to similar sentiments displayed in Charlottesville just a few days earlier.

I have always found myself coming down on the side of Benjamin Franklin, who said, “It is better 100 guilty persons should escape [justice] than that one innocent person should suffer.”

It’s a maxim Franklin applied to our criminal justice system, but I think it’s relevant here.

If you attend a neo-Nazi rally and decide not to bring your pointy hood, you risk family and friends seeing your face plastered all over the news.

But let’s not allow the internet’s version of mob mentality to dictate the rules for our society.

There is a reason John Adams insisted ours be “a government of laws, and not of men.” There is a reason our Founding Fathers chose to make this nation a constitutional republic instead of one ruled only by the majority.

The internet is a powerful tool, but one better used to facilitate dialogue with others, and not as a weapon to bludgeon them. The internet may be a billion voices, but it can also wear a billion boots. Let’s not trample the innocent in our mad rush to condemn what is genuinely horrific.

If you’d like to be my third follower on Twitter, you can find me @realEricAustin or email me at

TECH TALK: How technology could save our Republic

Gerry-mandering explained. (image credit: Washington Post)


by Eric W. Austin
Computer Technical Advisor

Elbridge Gerry, second-term governor of Massachusetts, is about to do something that will make his name both legendary and despised in American partisan politics. It’s two months after Christmas, in a cold winter in the year 1812.

Typical of a politician, the next election is foremost in his mind. And Gerry has reason to worry.

Elections in those days were a yearly affair. Between 1800 and 1803, Gerry lost four elections in a row to Federalist Caleb Strong. He didn’t dare run again until after Strong’s retirement in 1807. Three years later, though, Elbridge Gerry gathered his courage and tried again.

This time he won.

Gerry was a Democratic-Republican, but during his first term the Federalists had control of the Massachusetts legislature, and he gained a reputation for championing moderation and rational discourse.

However, in the next election cycle his party gained control of the state senate, and things changed. Gerry became much more partisan, purging many of the Federalist appointees from the previous cycle and enacting so-called “reforms” that increased the number of judicial appointments, which he then filled with party flunkies.

The irony of this is that Gerry had been a prominent figure at the U.S. Constitutional Convention in the summer of 1787, where he was a vocal advocate of small government and individual liberties. He even angrily quit the Massachusetts ratifying convention in 1788 after getting into a shouting match with the convention chair, Francis Dana, primarily over the Constitution’s lack of a bill of rights. (The first ten amendments, added later, became the Bill of Rights.)

But none of this is what Elbridge Gerry is remembered for.

That came in the winter of 1812 when he signed into law a bill redrawing voting districts in such a way that it gave electoral advantage to his own Democratic-Republican party.

Political cartoon from the early 1800s.

The move was highly successful from a political standpoint, but unpopular. In the next election, Gerry’s Democratic-Republican party won all but 11 seats in a state senate that had, only the year before, been controlled by the Federalists. This despite losing the House by a wide margin, as well as the governorship: his old Federalist nemesis, Caleb Strong, came out of retirement to defeat him.

According to a legendary account from the period, someone posted a map of the newly-drawn districts on the wall in the offices of the Boston Gazette. One of the editors pointed to the district of Essex and remarked that its odd shape resembled a salamander. Another editor exclaimed, “A salamander? Call it a Gerry-mander!”

Thus the first “Gerry-mander” was born.

Today the process of redrawing district boundaries in such a way as to favor one party over another is referred to as “gerrymandering.”

The Constitution mandates that states be divided into districts of equal population. So, every ten years, when a census is taken, states redraw their voting districts based on population changes. In many states, the party that controls the state legislature at the time also controls the redistricting process. Predictably, gerrymandering is most prevalent in these states.

By strategically drawing district lines to give the ruling party an electoral advantage, that party can maintain its legislative power even if popular support shifts away from it in the following years.

According to a 2014 study conducted by The Washington Post, Republicans are currently responsible for drawing eight out of ten of the most gerrymandered districts in the U.S. This has resulted in the Democrats being under-represented by about 18 seats in the U.S. House of Representatives “relative to their vote share in the 2012 election.”

The most gerrymandered districts in the United States. (image credit: Washington Post)

Maine is one of the few states that has given this decision to an independent, bipartisan commission instead. That commission then sends a proposal for approval to the state legislature. Of course, we have it a bit easier, with only two districts to worry about.

For much of the nation, gerrymandering is still one of the most prevalent and democratically destructive practices in politics today.

It’s also notoriously difficult to eradicate.

The problem is that someone has to decide on the districts. And everyone is biased.

Even in the few cases where legal action has been brought against an instance of partisan gerrymandering, how does one prove that bias in a court of law? The quandary is this: in order to prove a district was drawn with biased intent, one must first provide an example of how the district would look if drawn without bias. But since all districts are drawn by people, there is no such example to use.

Because of this difficulty, in 2004 the Supreme Court ruled that such a determination constitutes an “unanswerable question.”

But that may be about to change.

There is currently a major redistricting case before the Supreme Court, involving a gerrymandered district map in Wisconsin. Professor Steve Vladeck, of the University of Texas School of Law, calls it “the biggest and most important election law case in decades.”

The reason the courts are now taking these cases more seriously is because of recent advances in computer-powered analytics: technology may finally provide that elusive example of an unbiased district.

This week, August 7-11, a team of mathematicians at Tufts University is holding a conference on the “Geometry of Redistricting” to look at this very problem.

A number of mathematical algorithms have already been proposed to remove the human-factor from the process of redistricting.

Brian Olson, a software engineer from Massachusetts, has developed an algorithm which draws districts based on census blocks. His approach aims to make districts as compact as possible while maintaining neighborhood integrity.
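Olson’s actual algorithm is more elaborate than anything that fits in a newspaper column, but the core idea of scoring a district’s compactness can be sketched in a few lines. Here is a minimal illustration (the function name and example figures are mine, not Olson’s) using the Polsby-Popper score, one commonly cited compactness measure:

```python
import math

def polsby_popper(area, perimeter):
    """Polsby-Popper compactness: 4*pi*A / P^2.
    Scores range from near 0 (sprawling) to 1 (a perfect circle)."""
    return (4 * math.pi * area) / (perimeter ** 2)

# A square district: area 100, perimeter 40
square_score = polsby_popper(100, 40)        # about 0.785

# A long, thin "salamander" of the same area: 50 x 2
salamander_score = polsby_popper(100, 104)   # about 0.116

print(round(square_score, 3), round(salamander_score, 3))
```

A circle scores a perfect 1.0; the thinner and stringier a district gets, the closer its score falls to zero, which is exactly what makes a “salamander” easy for a computer to flag.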

Debate continues over which factors are most essential to a redistricting algorithm, but eventually one method will become standard and the days of gerrymandering will be over.

Poor Elbridge Gerry. After losing the Massachusetts governorship, he became vice president under James Madison and then died in office, becoming the only signer of the Declaration of Independence to be buried in America’s capital. But he’s mostly remembered for the despised political practice that bears his name. Hopefully, soon even that will be forgotten.

Good riddance Elbridge Gerry, I say. Good riddance, sir!

The difference a computer makes: The top image shows the districts of North Carolina as they are drawn today. The bottom image shows districts drawn by an unbiased computer algorithm. Which looks more fair to you? (image credit: Washington Post)

TECH TALK: The Internet – At War with Itself


by Eric Austin
Computer Technical Advisor

There’s a war going on, although you might not be aware of it. It’s a war between the almighty dollar and the information superhighway.

I began my career in the early ‘90s, just as the internet-fueled tech boom was taking off. I’ve watched the internet grow from a tiny seed in the mind of Al Gore (ha ha) to the social and economic juggernaut that it is today.

But even from its very inception there were two competing ideas fighting to shape its future. One was an outgrowth of a cultural groupthink: the “hippie” movement of the internet, if you will. It’s an apt comparison, as the philosophy it inspired hearkens back to that optimistic era of peace and love.

This group believed the internet was a chance for humans to reinvent themselves. To escape the shackles of corporatism and Gordon Gekko-greed that had defined the previous decade of the 1980s.

The phrase “information wants to be free” defined this school of thought.

The “open-source” software movement, based on the idea of collaborative genius — that a group of unfettered minds could create something greater than any of its individual parts — gave birth to the Linux operating system, Firefox browser, VLC Media Player, GIMP and many other software programs. Each of us benefits from this movement whenever we download free software distributed under the GNU General Public License. And while it’s still only a sliver of the desktop market in comparison to Microsoft Windows, Linux dominates on mobile devices (56 percent) and powers more than 40 percent of the world’s web servers.

You can see the influence of this collaborative philosophy everywhere on the internet, and the world wide web is a better place because of it.

But there is another entity on the internet. A menacing, dark presence that wants to swallow up the hope and optimism of the free information movement. This force seeks to monetize and control the avenues of free access which the internet currently fosters. Rather than bettering society through collaborative social effort, this capitalist creature wants to conquer in the name of cold hard cash. It wants to turn the internet superhighway into a toll road.

This shadow over the internet is cast by ISPs, digital distribution giants and communication companies seeking to cement their dominance over their respective consumer markets.

The debate over Net Neutrality is the most recent battle to be waged in the war of $$ vs WWW. The corporate side promises greater stability, consistency and service, but takes away freedom, ingenuity and the unexpected.

I’m here to tell you this is a war we need. It’s one of the good wars. This struggle is what keeps corporate greed on its toes. It leaves room for small start-ups to make an unexpected splash, and keeps established familiars from becoming complacent – yet provides the structure and efficiency that stimulates growth.

Without one we wouldn’t have great services like Netflix and Amazon. But without the other, great services like Netflix and Amazon never would have gotten the chance.

Net Neutrality must be retained because it levels the playing field. It doesn’t prevent bullies on the playground, but it makes sure everyone has a fighting chance.

Support Net Neutrality, not because it’s the right thing to do — even though it is. Support it because without the conflict it creates we wouldn’t have the dynamic technical environment that we’ve enjoyed for the last 20 years.

This is one time when conflict is good. Besides, it frustrates the corporate overlords.

Good. Keep them frustrated.

Get involved! Visit and join almost 11 million other Americans who have left comments with the FCC in support of Net Neutrality.


TECH TALK: Welcome to Reality 2.0


by Eric Austin
Computer Technical Advisor

Let me take you back a few decades to the 1980s. I was 12 years old and cruising around the neighborhood on my ten-speed mountain bike. On this particular day, I was exploring the garage sales along Lakeview Drive that were so prevalent at that time of year.

At one of them, I found an old video game console for 75 cents and eagerly trundled it atop my bike for the trip home. It was one of those all-in-one units with the games built into it, and two controllers, then called “paddles,” each with only a simple knob, like the dimmer on a light switch.

One of the first videogames: Pong.

All the games included were variations on Pong, in which each player controls a short, vertical line on opposite sides of the screen, moved up or down by the control knob on the game paddle. The objective of the game is to “bounce” a little white dot from one side of the screen to the other in order to score points against your opponent.

Nobody looking at a screenshot of this game would mistake it for an actual game of tennis.

Skip forward to the present day. Steam, the largest digital distribution platform on the web, holds its Summer Sale, and I pick up the game Grand Theft Auto 5 for 20 bucks.

GTA5 is one of the biggest videogame releases in recent years, with over 11 million copies sold within 24 hours of its debut. Basically, it’s a crime story told in a simulated world based on the Southern California city of Los Angeles and the surrounding countryside.

Consider just a few mind-blowing facts about the world of GTA5: The game world encompasses more than 100 square miles! You can fly a plane, ride a motorcycle, or go scuba-diving off the coast of California. If you stop your car in the middle of traffic, drivers around you will beep their horns and flip you the bird until you get moving again. If you make your character act crazy in the game, passers-by will pull out their phones and film you — just like real life!

I’m only 40 (okay forty-two!), but I’ve watched as our ability to simulate real life has gone from Pong, a rudimentary effort to simulate the game of tennis, to Grand Theft Auto 5, an incredibly detailed simulation of an entire city, down to building interiors, wildlife in the countryside, and artificial intelligence-driven people that react to your actions on the fly.

Grand Theft Auto 5: An entire simulated city.

Considering this kind of advancement just in my short life, what kind of worlds will we be able to simulate in another 50 years? If the past is anything to go by, computer simulations of the future will be so real that they will be indistinguishable from actual reality. Already it is difficult to watch a movie today and know which parts of it are real and which are computer generated. Combine this graphics realism with advances in computing power and artificial intelligence and it is not difficult to imagine what videogames of the future might be like.

This kind of thinking has led a number of brilliant minds, as diverse as entrepreneur Elon Musk and astrophysicist Neil deGrasse Tyson, to ask: Are we already living in a simulated world? Would we be able to tell if we were?

The argument goes something like this:

We can assume that, in the future, it will be possible to simulate reality so well that it is impossible to distinguish from the real thing. Further, it is reasonable to assume there will be many more simulated worlds than actual worlds. Some of those simulated worlds would be simulations of the past, such as Earth in the year 2017. And since there is only one actual Earth 2017, but many possible simulations of Earth 2017, it is therefore more likely that we are living in a simulation than not. For example, if there are a billion simulated versions of Earth 2017 but only one actual Earth 2017, the odds that we are living in the real world rather than a simulated one would be a billion to one against.
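The arithmetic behind that conclusion is simple enough to sketch. This is my own illustration, assuming each world, real or simulated, is an equally likely place to find yourself:

```python
def odds_real(simulated_worlds):
    """If there is one real Earth 2017 and N indistinguishable
    simulations of it, the chance that any given observer is in
    the one real world is 1 / (N + 1)."""
    return 1 / (simulated_worlds + 1)

print(odds_real(0))              # no simulations: certainty, 1.0
print(odds_real(1_000_000_000))  # a billion simulations: about one in a billion
```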

Consider something even weirder. In a video game-simulated world, your computer only renders the part of the virtual world you are currently experiencing. So, when you are looking in a specific direction in the game world, your computer renders the graphics for the part of the world you are seeing, but not for anything that is currently off-screen. It does this to save processing power.
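Game programmers call this trick “culling.” A toy two-dimensional version of the visibility test (the names and numbers here are my own illustration, not code from any real engine) might look like this:

```python
import math

def is_on_screen(cam_x, cam_y, facing_deg, fov_deg, obj_x, obj_y):
    """Return True if an object falls inside the camera's field of
    view; a renderer would skip drawing anything that doesn't."""
    angle_to_obj = math.degrees(math.atan2(obj_y - cam_y, obj_x - cam_x))
    # Smallest signed difference between the two angles, in [-180, 180)
    diff = (angle_to_obj - facing_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

# Camera at the origin, facing east (0 degrees), 90-degree field of view
print(is_on_screen(0, 0, 0, 90, 10, 2))   # ahead and slightly off-axis: drawn
print(is_on_screen(0, 0, 0, 90, -10, 0))  # directly behind the camera: skipped
```

Real engines do this in three dimensions with a “view frustum,” but the principle is the same: whatever you aren’t looking at simply never gets computed.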

Well, the “real” world works in an eerily similar way. According to the Copenhagen interpretation of quantum mechanics, “physical systems generally do not have definite properties prior to being measured” (Wikipedia, 2017). In other words, quantum particles do not exist in a specific place and time until they are interacted with, a phenomenon physicists call “wave function collapse,” in which all possible values (of location, of momentum, etc.) collapse to a single value at the moment of interaction. It is almost as if the universe were a quantum computer that saves processing power by not calculating exact values for reality until an observer’s interaction makes it necessary. Weird, huh?

Is it possible that we are unwitting inhabitants of an enormous simulation powered by a quantum computer existing sometime in the future?

Are your neighbors simply advanced A.I. personalities designed to give this simulation a veneer of realism? Could we all simply be self-aware A.I. placed into a simulation of earth in the year 2017 and programmed to believe this is not a simulation?

Of course, would I be able to ask these questions if we were?

Do you know someone living in their own simulation of reality? Come share your experience on or send an email to me at!

TECH TALK: The importance of backing up your computer


by Eric Austin
Computer Technical Advisor

This past weekend I was the unfortunate victim of a hard drive crash. I have multiple drives installed in my computer, and this was my main Windows system drive. Even more infuriatingly, the drive was less than a year old.

It took me two days to diagnose the problem, pull out the bad drive and install a new one. And it got me thinking about how important backing up your data can be! Here are a few best practices to keep in mind.

pc computer hard drive crash

Don’t let this happen to you!

Consider using a separate drive for your data.

You’ll want to install your operating system (OS) on the fastest drive attached to your computer, typically the internal hard drive, and use that same drive for programs and games. But because it is also the drive in most constant use, reading and writing as your system runs, it’s the drive most likely to fail.

So use another physical drive to store your personal data (e.g. pictures, documents, etc…). The simplest solution for this is to invest in a flash drive that can be plugged into a spare USB port. A 64 GB flash drive is currently available on Amazon for only $15.99. The advantage to this is how easy it is to unplug the drive and take your data with you as the need arises.

Luckily, I followed this advice myself and didn’t lose any significant data when my system drive crashed.

You might also consider cloud solutions to back up your data. Most cloud storage services, like Dropbox, Apple’s iCloud or Microsoft’s OneDrive, allow you to set up automatic syncing so that certain folders on your hard drive are always mirrored by a copy of your data stored in the cloud. Although all of these services have free tiers, you’ll likely need to pay a subscription if you want to store a large amount of data.

There are a number of good automated back-up systems available, including Apple’s excellent Time Machine utility that comes packaged with OS X, or Windows’ Backup and Restore tool. Most of these solutions require an external drive dedicated to backups (it can’t be used for anything else). But with hard drives so cheap these days, especially flash drives, this is certainly an option you should look into if you don’t want to mess with manually copying the data yourself.

Another option is to invest in a Blu-ray drive that lets you back up to Blu-ray discs, which can hold up to 50 gigabytes each (dual-layer). This is a good option if you want a portable back-up that can be stored off-site.

Whichever solution you choose, build in some redundancy. That means if you back up your data every month to one external drive, also back it up every six months to a different drive, so that when your first back-up fails (and it will), you won’t be completely SOL. Even better, store that second back-up in a separate location from the first, like a safety deposit box or a friend’s house, so that if your house burns down or is burgled (God forbid!) you’ll have another back-up to (pardon the pun) fall back on.
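If you’d rather not buy dedicated software, even a few lines of script can handle that monthly copy. Here is a minimal sketch in Python (the paths in the comment are hypothetical; point them at your own data folder and backup drive):

```python
import shutil
from datetime import date
from pathlib import Path

def backup(source, dest_root):
    """Copy a folder to a dated subfolder on the backup drive,
    e.g. E:/backups/2017-11-09/Documents, and return that path."""
    source = Path(source)
    target = Path(dest_root) / date.today().isoformat() / source.name
    shutil.copytree(source, target)  # fails if today's copy already exists
    return target

# Hypothetical paths; substitute your own:
# backup("C:/Users/Eric/Documents", "E:/backups")
```

Run it once a month, and swap in your second drive for the six-month copy.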

Ransomware screenshot (image source: The New York Times)

A hard drive crash or natural disaster isn’t the only reason to make sure you always have a recent back-up of your data. WannaCry is a computer virus that swept across the globe earlier this year. It’s a particular kind of virus called “ransomware” that invades your computer, encrypts all of your data (making it inaccessible to you), and then shows you a screen demanding a ransom payment in Bitcoin, threatening to delete your data if you don’t pay.

A lot of people paid that ransom because they didn’t have a recent back-up of their data.

Don’t wait till it happens to you. Start backing up your data today!

Have a question or idea for a column? Send me an email at or leave a comment on!