TECH TALK: A.I. on the Road: Who’s Driving?


by Eric Austin
Computer Technical Advisor

In an automobile accident, in the moments before your car strikes an obstacle, in the seconds before glass shatters and steel crumples, you usually don't have time to think, and you are often haunted by self-recriminations in the days and weeks afterward. Why didn't I turn? Why didn't I hit the brakes sooner? Why'd I bother even getting out of bed this morning?

Driverless cars aim to solve this problem by replacing the human brain with a silicon chip. Computers think faster than we do and they are never flustered — unless that spinning beach ball is a digital sign of embarrassment? — but the move to put control of an automobile in the hands of a computer brings with it a new set of moral dilemmas.

Unlike your personal computer, a driverless car is a thinking machine. It must be capable of making moment-to-moment decisions that could have real life-or-death consequences.

Consider a simple moral quandary. Here's the setup: It's summer and you are driving down Lakeview Drive, headed toward the south end of China Lake. You pass China Elementary School. School is out of session so you don't slow down, but you've forgotten about the Friends Camp, just beyond the curve, where groups of children often cross the road on their way to the lake on the other side. You round the curve and there they are, a whole gang of them, dressed in swimsuits and clutching beach towels. You hit the brakes and are shocked when they don't respond. You now have seven-tenths of a second to decide: do you drive straight ahead and strike the crossing kids, or avoid them and dump your car in the ditch?

Not a difficult decision, you might think. Most of us would prefer a filthy fender to a bloody bumper. But what if instead of a ditch, it was a tree, and the collision killed everyone in the car? Do you still swerve to avoid the kids in the crosswalk and embrace an evergreen instead? What if your own children were in the car with you? Would you make the same decision?

If this little thought exercise made you queasy, that’s okay. Imagine how the programmers building the artificial intelligence (A.I.) that dictates the behavior of driverless cars must feel.

The odds of this happening to you may be a million to one, but with 253 million cars on the road, it will happen to someone. And in the near future, that someone might be a driverless car. Will the car's A.I. remember where kids often cross? How will it choose one life over another in a zero-sum game?

When we are thrust into these life-or-death situations, we often don’t have time to think and react mostly by instinct. A driverless car has no instinct, but can process millions of decisions a second. It faces the contradictory expectations of being both predictable and capable of reacting to the unexpected.

That is why driverless cars were not possible before recent advances in artificial intelligence and computing power. Rather than the traditional, linear conditional programming of the past (e.g., "if this, then that"), driverless cars employ a newer field of computer science called "machine learning," which relies on more human-like capabilities, such as pattern recognition, and updates its own algorithms based on past results in order to become more accurate in the future. Basically, the developers give the A.I. a series of tests, and based on its success or failure in those tests, the A.I. adjusts its algorithms to improve its success rate.
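That test-and-adjust loop can be sketched in a few lines of code. This is a deliberately toy example, not how a real driverless system is trained: it "learns" a single braking threshold by keeping whatever random tweak passes at least as many of its tests as before.

```python
import random

def learn_braking_threshold(examples, rounds=5000, seed=42):
    """Learn a braking distance from labeled examples by trial and error.

    Each example is a pair: (distance to obstacle in meters, should_brake).
    The loop proposes a small random tweak to its one parameter and keeps
    the change whenever it passes at least as many tests as before -- a
    toy stand-in for a system updating itself based on past results.
    """
    rng = random.Random(seed)
    threshold = 0.0  # initial guess: never brake

    def score(t):
        # How many examples does this threshold classify correctly?
        return sum((dist < t) == should_brake for dist, should_brake in examples)

    for _ in range(rounds):
        candidate = threshold + rng.uniform(-1, 1)
        if score(candidate) >= score(threshold):
            threshold = candidate
    return threshold

# Training data: brake whenever an obstacle is closer than 25 meters.
data = [(d, d < 25) for d in range(100)]
learned = learn_braking_threshold(data)
```

After a few thousand rounds the learned threshold settles just under 25 meters, even though the program was never told that number directly; it only saw pass/fail results.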

That is what is happening right now in San Francisco, Boston, and soon New York. Las Vegas is testing a driverless bus system. These are opportunities for the driverless A.I. to encounter real-life situations and learn from those encounters before the technology is rolled out to the average consumer.

The only way we learn is from our mistakes. That is true of driverless cars, too, and they have made a few. There have been hardware and software failures and unforeseen errors. In February 2016, a Google driverless car experienced its first crash, turning into the path of a passing bus. In June 2016, a man in a self-driving Tesla was killed when the car drove at full speed under a white tractor-trailer crossing in front of it. The white trailer against the smoky backdrop of a cloudy sky fooled the car. The occupant was watching Harry Potter on the car's television screen and never saw it coming.

Mistakes are ubiquitous in our lives; "human error" has become a cliché. But will we be as forgiving of such mistakes when they are made by a machine? Life is an endless series of unfortunate coincidences, and no one can perfectly predict every situation. But, lest I sound like Dustin Hoffman in the film Rain Man, quoting plane-crash statistics, let me say I am certain studies will eventually show autonomous vehicles reduce overall accident rates.

Also to be considered are the legal aspects. If a driverless car strikes a pedestrian, who is responsible? The owner of the driverless car? The car manufacturer? The developer of the artificial intelligence governing the car’s behavior? The people responsible for testing it?

We are in the century of A.I., and its first big win will be the self-driving car. The coming decade will be an interesting one to watch.

Get ready to have a new relationship with your automobile.

Eric can be emailed at

TECH TALK: The Equifax Hack – What you need to know


by Eric Austin
Computer Technical Advisor

Do you have a coin? Flip it. Tails, you are about to be the victim of identity theft. Heads, you’re safe — maybe. That’s the situation created by the recent Equifax data breach.

The hack exposed the personal information of 143 million Americans. That's nearly half of everyone in America. Names and addresses, Social Security numbers, birth dates, and even driver's license numbers were stolen, as well as 209,000 credit card numbers.

“This is about as bad as it gets,” Pamela Dixon, executive director of the World Privacy Forum, a nonprofit research group, told the New York Times. “If you have a credit report, chances are you may be in this breach. The chances are much better than 50 percent.”

As a precaution, the widespread advice from financial advisers is to request a freeze of your credit from each of the three big credit reporting agencies: TransUnion, Experian and Equifax. Each freeze request will cost you $10, although, after some seriously negative press, Equifax has decided to waive the fee until November 21.

The details of the hack and Equifax's handling of it are also concerning. According to the Times, Equifax detected the breach in July but didn't warn consumers until September. It's estimated hackers had access to Equifax's data from mid-May until the breach was discovered on July 29.

The New York Post first revealed the cause of the breach: a vulnerability in the software package Apache Struts, an open-source web development framework used by many big financial institutions. The developers of the software discovered the vulnerability back in March and issued a fix for the problem, but Equifax neglected to update its systems.

After the public announcement in September, Equifax set up a website where consumers can check to see if they are among those affected. According to the company, at the site you can “determine if your information was potentially impacted by this incident.”

You can also sign up for a free year of identity protection through their service, TrustedID. Initially, Equifax received some backlash when it was discovered that consumers signing up for the program were forced to agree to a “terms of service” that waived their rights to sue for damages. The language has since been altered, and Equifax recently released a statement insisting that using the service will not require individuals to give up any of their rights to participate in a class-action lawsuit.

Other troubling reports have come to light as well. The day after Equifax discovered the data breach – but over a month before it was disclosed to the public – three Equifax executives, including the company’s chief financial officer, unloaded nearly $2 million in corporate stock. The company’s stock value has fallen more than 35 percent in the days since, and Congress is calling for an investigation into possible insider trading.

Equifax’s recent activities in Washington have only added to the bad press. In the months leading up to the hack, Equifax was busy lobbying Washington to relax the regulations and safeguards on the credit reporting industry. According to The Philadelphia Inquirer, the company spent more than $500,000 seeking to influence lawmakers on issues such as “data security and breach notification” and “cybersecurity threat information sharing” in the first six months of 2017.

This includes an effort to repeal federal regulations upholding a consumer’s right to sue credit reporting companies. In July, as reported by the Consumerist, an arm of Consumer Reports, the House of Representatives passed a resolution under the Congressional Review Act in a slim, party-line vote. If passed by the Senate and signed by the President, the resolution would overturn certain rules created by the Consumer Financial Protection Bureau to regulate the financial industry. That agency was set up as a safeguard for consumers after the financial crisis of 2007-08. Among the rules in danger of repeal are measures meant to protect consumers by “curbing the use of ‘forced arbitration’ in many consumers’ financial contracts.”

And Equifax is likely to profit from this act of negligence, as it fuels existing paranoia about online privacy and will inspire millions to spend money on the pseudo-security of identity protection services, including Equifax’s own TrustedID.

The fallout from this hack is still being assessed, and likely won’t be fully known for years, if ever. This is the Deepwater Horizon of data breaches, and it should serve as a similar wake-up call for consumers.

We need a higher standard of accountability in the financial industry. These institutions no longer simply protect our money. Now they guard our very identities. Their servers should be as secure as their bank vaults. Money is replaceable, but most of us have only the one identity.

TECH TALK: Welcome to the world of Big Data


by Eric Austin
Computer Technical Advisor


What exactly is Big Data? Forbes defines it as “the exponential explosion in the amount of data we have generated since the dawn of the digital age.”

Harvard researchers Erez Aiden and Jean-Baptiste Michel explore this phenomenon in their book, Uncharted: Big Data as a Lens on Human Culture. They note, “If we write a book, Google scans it; if we take a photo, Flickr stores it; if we make a movie, YouTube streams it.”

And Big Data is more than just user-created content from the digital era. It also includes previously published books that are now newly digitized and available for analysis.

Together with Google, Aiden and Michel have created the Google Ngram Viewer, a free online tool allowing anyone to search for n-grams, or linguistic phrases, in published works and plot their occurrence over time.

Since 2004, Google has been scanning the world’s books and storing their full text in a database. To date, they have scanned 15 million of the 129 million books published between 1500 and 2008. From this database, researchers created a table of two billion phrases, or n-grams, which can be analyzed by the year of publication of the book in which they appear. Such analysis can provide insight into the evolution of language and culture over many generations.
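An n-gram is just a run of n consecutive words. Here is a minimal sketch of how a phrase table like Google's can be built, counting four-word phrases in a single made-up sentence rather than in 15 million books:

```python
from collections import Counter

def ngrams(text, n):
    """Break a text into its overlapping n-word phrases (n-grams)."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

sentence = "the united states is a union and the united states is young"
counts = Counter(ngrams(sentence, 4))
print(counts["the united states is"])  # 2
```

Google's Ngram Viewer applies the same counting, book by book, and then plots each phrase's frequency against the year of publication.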

As an example, the researchers investigated the phrase “the United States are” versus “the United States is.” When did we start referring to the United States as a singular entity, rather than a group of individual states? Most linguists think this change occurred after the Civil War in 1865, but from careful analysis with the Google Ngram Viewer, it is clear this didn’t take off until a generation later in the 1880s.

Author Seth Stephens-Davidowitz thinks the internet has an even greater resource for understanding human behavior: Google searches. Whenever we do a search on Google, our query is stored in a database. That database of search queries is itself searchable using the online tool Google Trends. Stephens-Davidowitz found this data so interesting he wrote his dissertation on it, and now has written a book: Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.

Google Trends doesn’t just tell us what people are searching for on the internet, it also tells us where those people live, how old they are, and what their occupation is. Clever analysts can cross-index this data to tell us some interesting facts about ourselves. Stephens-Davidowitz argues this data is even more accurate than surveys because people lie to other people, but not to Google.

In his book, Everybody Lies, Stephens-Davidowitz reports that on the night of Obama’s election in 2008, one out of a hundred Google searches containing the word “Obama” also contained the word “nigger” or “KKK.” But who was making those searches? Are Republicans more racist than Democrats? Not according to the data. Stephens-Davidowitz says there were a similar number of these types of searches in Democratic-leaning areas of the country as in Republican ones. The real divide is not North/South or Democrat/Republican, he asserts, but East/West, with a sharp drop-off in states west of the Mississippi River.

Stephens-Davidowitz even suggests Google Trends can offer a more accurate way of predicting vote outcomes than exit polling. By looking at searches containing the names of both candidates in the 2016 election, he found that the order in which the names appear in a search may demonstrate voter preference. In key swing states, there were a greater number of searches for “Trump Clinton” versus “Clinton Trump,” indicating a general movement toward the Republican candidate. This contradicted much of the polling data at the time, but turned out to be a more accurate barometer of candidate preference.
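The order-of-names signal is easy to compute once you have a pile of queries. A sketch, using invented queries for illustration only (not real Google Trends data):

```python
from collections import Counter

# Hypothetical search queries -- made up for this example.
queries = [
    "trump clinton debate", "clinton trump polls", "trump clinton news",
    "trump clinton", "clinton trump debate", "trump clinton polls",
]

first_named = Counter()
for q in queries:
    words = q.split()
    if "trump" in words and "clinton" in words:
        # Whichever candidate appears first in the query gets the tally.
        first_named[min(("trump", "clinton"), key=words.index)] += 1

print(first_named)  # Counter({'trump': 4, 'clinton': 2})
```

On this toy sample, "trump" leads in two-thirds of the mixed queries; Stephens-Davidowitz ran the same kind of tally over real search logs, state by state.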

The world of Big Data is huge and growing larger every day. Researchers and scientists are finding new and better ways of analyzing it to tell us more about the most devious creatures on this planet. Us.

But we must be careful of the seductive lure of Big Data, and we should remember the words popularized by Mark Twain: “There are three kinds of lies: lies, damned lies, and statistics.”

TECH TALK: Internet “outing” – social conscience or vigilante justice?


by Eric Austin
Computer Technical Advisor

A couple of weeks ago, a violent clash broke out between protesters and counter-protesters in Charlottesville, Virginia. The violence occurred at a rally organized by white nationalists, angry at the imminent removal of a memorial for Confederate General Robert E. Lee.

I was home and watching it unfold as it happened. It was chilling to see footage of hundreds of men marching six abreast, torches held high and chanting “Blood and soil!” and “Jews will not replace us!”

Later in the day, reports came in that one of the white nationalists had rammed his car into a crowd of counter-protesters, killing a young woman and injuring many more. The moment was captured on video and played ad nauseam in the news media.

An observant Twitter user noted the major difference between the racists of the past and those marching in Charlottesville: they no longer bothered with the iconic white robes and conical hoods. Their faces were plain to see.

Instead of a few grainy pictures on the front page of the Evening Post, thousands of photos and live video got posted to the internet.

The following day a tweet popped up in my Twitter feed. It was an appeal for help in identifying individuals from the photos and video that had been circulating the internet and cable news channels. Full of righteous indignation, I liked and retweeted it.

Most of us have online profiles available for public view, with our real names attached to a photo, and often to a place of employment or school, or even to the names of other people we know: on sites like Facebook, LinkedIn or Instagram, and in less obvious places like school alumni pages and business websites that list employees. Even our Amazon profiles contain information about us. We leave our digital fingerprints everywhere.

On Monday, reports continued to pour in. One of the white nationalists had been identified, and his employer began receiving angry calls. He was fired.

Another young man’s family, after he was outed on Twitter, publicly disowned him in a letter sent to their local paper – which was then broadcast worldwide on the web. His nephew gave interviews to the press. “Our relatives were calling us in a panic earlier today,” he said, “demanding we delete all Facebook photos that connect us to them.”

This is all for the best, I thought to myself. Racism is wrong. White nationalism is destructive. Surely, the best way of dealing with such views is to shine a light on them.

The practice of publishing identifying information to the internet, called “doxing,” has grown over recent years. It appears in forms both arguably beneficial (exposure of government or corporate corruption) and utterly malicious (revenge porn).

Within days, the New York Times was reporting on one poor man in Arkansas who had been misidentified by overzealous internet sleuths. His inbox quickly filled with messages of vulgarity and hate. Ironically, this abuse came in reaction to the very same sentiments displayed in Charlottesville just a few days earlier.

I have always found myself coming down on the side of Benjamin Franklin, who said, “It is better 100 guilty persons should escape [justice] than that one innocent person should suffer.”

It’s a maxim Franklin applied to our criminal justice system, but I think it’s relevant here.

If you attend a neo-Nazi rally and decide not to bring your pointy hood, you risk family and friends seeing your face plastered all over the news.

But let’s not allow the internet’s version of mob mentality to dictate the rules for our society.

There is a reason John Adams insisted “we are a nation of laws, not of men.” There is a reason our Founding Fathers chose to make this nation a constitutional republic instead of one ruled only by the majority.

The internet is a powerful tool, but one better used to facilitate dialogue with others, and not as a weapon to bludgeon them. The internet may be a billion voices, but it can also wear a billion boots. Let’s not trample the innocent in our mad rush to condemn the justifiably horrific.

If you’d like to be my third follower on Twitter, you can find me @realEricAustin or email me at

TECH TALK: How technology could save our Republic

Gerry-mandering explained. (image credit: Washington Post)


by Eric W. Austin
Computer Technical Advisor

Elbridge Gerry, second-term governor of Massachusetts, is about to do something that will make his name both legendary and despised in American partisan politics. It’s two months after Christmas, in a cold winter in the year 1812.

Typical of a politician, the next election is foremost in his mind. And Gerry has reason to worry.

Elections in those days were a yearly affair. Between 1800 and 1803, Gerry lost four elections in a row to Federalist Caleb Strong. He didn’t dare run again until after Strong’s retirement in 1807. Three years later, though, Elbridge Gerry gathered his courage and tried again.

This time he won.

Gerry was a Democratic-Republican, but during his first term the Federalists had control of the Massachusetts legislature, and he gained a reputation for championing moderation and rational discourse.

However, in the next election cycle his party gained control of the Senate and things changed. Gerry became much more partisan, purging many of the Federalist appointees from the previous cycle and enacting so-called “reforms” increasing the number of judicial appointments, which he then filled with Republican flunkies.

The irony of this is that Gerry had been a prominent figure in the U.S. Constitutional Convention in the summer of 1787, where he was a vocal advocate of small government and individual liberties. He even angrily quit the Massachusetts ratifying convention in 1788 after getting into a shouting match with convention chair Francis Dana, primarily over the lack of a bill of rights. (The first 10 amendments, which became the Bill of Rights, were added in 1791.)

But none of this is what Elbridge Gerry is remembered for.

That came in the winter of 1812 when he signed into law a bill redrawing voting districts in such a way that it gave electoral advantage to his own Democratic-Republican party.

Political cartoon from the early 1800s.

The move was highly successful from a political standpoint, but unpopular. In the next election, Gerry’s Democratic-Republican party won all but 11 seats in a state Senate that had – only the year before – been controlled by the Federalists. This despite losing the House by a wide margin, and the governorship as well: his old Federalist nemesis, Caleb Strong, came out of retirement to defeat him.

According to a legendary account from the period, someone posted a map of the newly-drawn districts on the wall in the offices of the Boston Gazette. One of the editors pointed to the district of Essex and remarked that its odd shape resembled a salamander. Another editor exclaimed, “A salamander? Call it a Gerry-mander!”

Thus the first “Gerry-mander” was born.

Today the process of redrawing district boundaries in such a way as to favor one party over another is referred to as “gerrymandering.”

The Constitution mandates that each state be divided into districts of equal population. So every ten years, when a census is taken, states redraw their voting districts based on population changes. In many states, the party controlling the state legislature at the time dictates how those lines are drawn. Predictably, gerrymandering is most common in these states.

By strategically drawing district lines to give the ruling party an advantage in elections, that party can maintain its legislative power even if the majority of the population moves away from it in the following years.

According to a 2014 study conducted by The Washington Post, Republicans are currently responsible for drawing eight out of ten of the most gerrymandered districts in the U.S. This has resulted in the Democrats being under-represented by about 18 seats in the U.S. House of Representatives “relative to their vote share in the 2012 election.”

The most gerrymandered districts in the United States. (image credit: Washington Post)

Maine is one of the few states that has given this decision to an independent, bipartisan commission instead. That commission then sends a proposal for approval to the state legislature. Of course, we have it a bit easier, with only two districts to worry about.

For much of the nation, gerrymandering is still one of the most prevalent and democratically destructive practices in politics today.

It’s also notoriously difficult to eradicate.

The problem is that someone has to decide on the districts. And everyone is biased.

Even in the few cases where legal action has been brought against an instance of partisan gerrymandering, how does one prove that bias in a court of law? The quandary is this: in order to prove a district was drawn with biased intent, one must first provide an example of how the district would look if drawn without bias. But since all districts are drawn by people, there is no such example to use.

Because of this difficulty, in 2004 the Supreme Court ruled that such a determination constitutes an “unanswerable question.”

But that may be about to change.

There is currently a major redistricting case before the Supreme Court. Professor Steve Vladeck, of the University of Texas School of Law, calls it “the biggest and most important election law case in decades.” It involves a gerrymandered district in Wisconsin.

The reason the courts are now taking these cases more seriously is because of recent advances in computer-powered analytics: technology may finally provide that elusive example of an unbiased district.

This week, August 7-11, a team of mathematicians at Tufts University is holding a conference on the “Geometry of Redistricting” to look at this very problem.

A number of mathematical algorithms have already been proposed to remove the human-factor from the process of redistricting.

Brian Olson, a software engineer from Massachusetts, has developed an algorithm which draws districts based on census blocks. His approach aims to make districts as compact as possible while maintaining neighborhood integrity.
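Compactness, at least, is easy to quantify. One widely used score in the redistricting literature (not necessarily the measure Olson's own algorithm optimizes) is the Polsby-Popper ratio, which compares a district's area to that of a circle with the same perimeter:

```python
import math

def polsby_popper(area, perimeter):
    """Polsby-Popper compactness: 4 * pi * area / perimeter**2.

    Scores run from 0 to 1. A circle scores exactly 1; a long,
    salamander-shaped district scores close to 0.
    """
    return 4 * math.pi * area / perimeter ** 2

# A 10x10 square district vs. a 1x100 ribbon with the same area:
square = polsby_popper(100, 40)   # about 0.79
ribbon = polsby_popper(100, 202)  # about 0.03
```

A redistricting algorithm can use a score like this as one term in the objective it tries to maximize, giving courts a concrete, human-free benchmark to compare a challenged map against.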

The debate over which factors are most essential to a redistricting algorithm is still going on, but eventually one method will become standard and the days of gerrymandering will be over.

Poor Elbridge Gerry. After losing the Massachusetts governorship, he became vice president under James Madison and then died in office, becoming the only signer of the Declaration of Independence to be buried in America’s capital. But he’s mostly remembered for the despised political practice that bears his name. Hopefully, soon even that will be forgotten.

Good riddance Elbridge Gerry, I say. Good riddance, sir!

The difference a computer makes: The top image shows the districts of North Carolina as they are drawn today. The bottom image shows districts drawn by an unbiased computer algorithm. Which looks more fair to you? (image credit: Washington Post)

TECH TALK: The Internet – At War with Itself


by Eric Austin
Computer Technical Advisor

There’s a war going on, although you might not be aware of it. It’s a war between the almighty dollar and the information superhighway.

I began my career in the early ‘90s, just as the internet-fueled tech boom was taking off. I’ve watched the internet grow from a tiny seed in the mind of Al Gore (ha ha) to the social and economic juggernaut that it is today.

But even from its very inception there were two competing ideas fighting to shape its future. One was an outgrowth of a cultural groupthink: the “hippie” movement of the internet, if you will. It’s an apt comparison, as the philosophy it inspired hearkens back to that optimistic era of peace and love.

This group believed the internet was a chance for humans to reinvent themselves. To escape the shackles of corporatism and Gordon Gekko-greed that had defined the previous decade of the 1980s.

The phrase “information wants to be free” defined this school of thought.

The “open-source” software movement, based on the idea of collaborative genius — that a group of unfettered minds could create something greater than any of its individual parts — gave birth to the Linux operating system, Firefox browser, VLC Media Player, GIMP and many other software programs. Each of us benefits from this movement whenever we download free software distributed under the GNU General Public Software License. And while it’s still only a sliver of the desktop market in comparison to Microsoft Windows, Linux dominates on mobile devices (56 percent) and powers more than 40 percent of the world’s web servers.

You can see the influence of this collaborative philosophy everywhere on the internet, and the world wide web is a better place because of it.

But there is another entity on the internet. A menacing, dark presence that wants to swallow up the hope and optimism of the free information movement. This force seeks to monetize and control the avenues of free access which the internet currently fosters. Rather than bettering society through collaborative social effort, this capitalist creature wants to conquer in the name of cold hard cash. It wants to turn the internet superhighway into a toll road.

This shadow over the internet is cast by ISPs, digital distribution giants and communication companies seeking to cement their dominance over their respective consumer markets.

The debate over Net Neutrality is the most recent battle to be waged in the war of $$ vs WWW. The corporate model promises greater stability, consistency and service, but takes away freedom, ingenuity and the unexpected.

I’m here to tell you this is a war we need. It’s one of the good wars. This struggle is what keeps corporate greed on its toes. It leaves room for small start-ups to make an unexpected splash, and keeps established players from becoming complacent – yet provides the structure and efficiency that stimulates growth.

Without one we wouldn’t have great services like Netflix and Amazon. But without the other, great services like Netflix and Amazon never would have gotten the chance.

Net Neutrality must be retained because it levels the playing field. It doesn’t prevent bullies on the playground, but it makes sure everyone has a fighting chance.

Support Net Neutrality, not because it’s the right thing to do — even though it is. Support it because without the conflict it creates we wouldn’t have the dynamic technical environment that we’ve enjoyed for the last 20 years.

This is one time when conflict is good. Besides, it frustrates the corporate overlords.

Good. Keep them frustrated.

Get involved! Visit and join almost 11 million other Americans who have left comments with the FCC in support of Net Neutrality.


TECH TALK: Welcome to Reality 2.0


by Eric Austin
Computer Technical Advisor

Let me take you back a few decades to the 1980s. I was 12 years old and cruising around the neighborhood on my ten-speed mountain bike. On this particular day, I was exploring the garage sales along Lakeview Drive that are so prevalent this time of year.

At one of them, I found an old video game console for 75 cents and eagerly trundled it atop my bike for the trip home. It was one of those all-in-one units with the games built in, and two controllers, then called “paddles,” each with only a simple knob, like the dimmer on a light switch.

One of the first videogames: Pong.

All the games included were variations on Pong, in which each player controls a short, vertical line on opposite sides of the screen, moved up or down by the control knob on the game paddle. The objective of the game is to “bounce” a little white dot from one side of the screen to the other in order to score points against your opponent.

Nobody looking at a screenshot of this game would mistake it for an actual game of tennis.

Skip forward to the present day. Steam, the largest digital distribution platform on the web, holds its Summer Sale, and I pick up the game Grand Theft Auto 5 for 20 bucks.

GTA5 is one of the biggest videogame releases in recent years, with over 11 million copies sold within 24 hours of its debut. Basically, it’s a crime story told in a simulated world based on Los Angeles and the surrounding Southern California countryside.

Consider just a few mind-blowing facts about the world of GTA5: The game world encompasses more than 100 square miles! You can fly a plane, ride a motorcycle, or go scuba-diving off the coast of California. If you stop your car in the middle of traffic, drivers around you will beep their horns and flip you the bird until you get moving again. If you make your character act crazy in the game, passers-by will pull out their phones and film you — just like real life!

I’m only 40 (okay forty-two!), but I’ve watched as our ability to simulate real life has gone from Pong, a rudimentary effort to simulate the game of tennis, to Grand Theft Auto 5, an incredibly detailed simulation of an entire city, down to building interiors, wildlife in the countryside, and artificial intelligence-driven people that react to your actions on the fly.

Grand Theft Auto 5: An entire simulated city.

Considering this kind of advancement just in my short life, what kind of worlds will we be able to simulate in another 50 years? If the past is anything to go by, computer simulations of the future will be so real that they will be indistinguishable from actual reality. Already it is difficult to watch a movie today and know which parts of it are real and which are computer generated. Combine this graphics realism with advances in computing power and artificial intelligence and it is not difficult to imagine what videogames of the future might be like.

This kind of thinking has led a number of brilliant minds, as diverse as entrepreneur Elon Musk and astrophysicist Neil deGrasse Tyson, to ask: Are we already living in a simulated world? Would we be able to tell if we were?

The argument goes something like this:

We can assume that, in the future, it will be possible to simulate reality so well that it is impossible to distinguish it from the real thing. Further, it is reasonable to assume that there will be a greater number of simulated worlds than actual worlds. One can then also assume that some of those simulated worlds would be simulations of the past, such as Earth in the year 2017. And since there is only one actual Earth 2017, but many possible simulations of Earth 2017, it is therefore more likely that we are living in a simulation than not. For example, if there are a billion simulated versions of Earth 2017, but only one actual Earth 2017, the odds that we are living in the real world and not a simulated one would be a billion to one against.
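For the mathematically curious, that billion-to-one claim is easy to check with a few lines of Python. This is just a sketch of the column's hypothetical numbers (one real Earth, a billion simulated copies), not a claim about the actual odds:

```python
# The simulation argument's arithmetic, using the column's hypothetical
# numbers: one real Earth 2017 and a billion indistinguishable simulations.
simulated_worlds = 1_000_000_000
real_worlds = 1

# If you could be any one of these worlds' inhabitants with equal chance,
# the probability you're in the single real world is tiny.
total = simulated_worlds + real_worlds
p_real = real_worlds / total

print(f"Chance of being in the real world: {p_real:.12f}")
print(f"Odds against: {simulated_worlds:,} to {real_worlds}")
```

The whole argument hinges, of course, on the assumption that every inhabitant, real or simulated, is equally likely to be "you."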

Consider something even weirder. In a video game-simulated world, your computer only renders the part of the virtual world you are currently experiencing. So, when you are looking in a specific direction in the game world, your computer renders the graphics for the part of the world you are seeing, but not for anything that is currently off-screen. It does this to save processing power.

Well, the “real” world eerily works in a very similar way. According to the Copenhagen interpretation of quantum mechanics, “physical systems generally do not have definite properties prior to being measured” (Wikipedia, 2017). In other words, quantum particles do not exist in a specific place and time until they are interacted with, something physicists call the “(probability) wave function collapse,” in which all possible values (of location, of momentum, etc.) collapse to a single value at the moment of interaction. It is almost as if the universe is a quantum computer that saves processing power by not calculating exact values for reality until an observer's interaction makes it necessary. Weird, huh?

Is it possible that we are unwitting inhabitants of an enormous simulation powered by a quantum computer existing sometime in the future?

Are your neighbors simply advanced A.I. personalities designed to give this simulation a veneer of realism? Could we all simply be self-aware A.I. placed into a simulation of Earth in the year 2017 and programmed to believe this is not a simulation?

Of course, would I be able to ask these questions if we were?

Do you know someone living in their own simulation of reality? Come share your experience on or send an email to me at!

TECH TALK: The importance of backing up your computer


by Eric Austin
Computer Technical Advisor

This past weekend I was the unfortunate victim of a hard drive crash. I have multiple drives installed in my computer, and this was my main Windows system drive. Even more infuriatingly, the drive was less than a year old.

It took me two days to diagnose the problem, pull out the bad drive and install a new one. And it got me thinking about how important backing up your data can be! Here are a few best practices to keep in mind.


Don’t let this happen to you!

Consider using a separate drive for your data.

You’ll want to install your operating system (OS) on the fastest drive attached to your computer, which is typically your internal hard drive, and use this same drive to install programs and games. But since this is also the drive used most often, constantly reading and writing as your system runs, it’s the drive most likely to fail.

So use another physical drive to store your personal data (e.g., pictures, documents, etc.). The simplest solution is to invest in a flash drive that plugs into a spare USB port. A 64 GB flash drive is currently available on Amazon for only $15.99. The advantage of this approach is how easy it is to unplug the drive and take your data with you as the need arises.

Luckily, I followed this advice myself and didn’t lose any significant data when my system drive crashed.

You might also consider cloud solutions to back up your data. Most cloud storage services, like Dropbox, Apple’s iCloud or Microsoft’s OneDrive, allow you to set up automatic syncing so that certain folders on your hard drive are always synced with a copy of your data stored in the cloud. Although all of these services have free options, you’ll likely need to pay a subscription if you want to store a large amount of data.

There are a number of good automated back-up systems available, including Apple’s excellent Time Machine utility, which comes packaged with OS X, and Windows’ Backup and Restore tool. Most of these solutions require an external drive dedicated to back-ups (it can’t be used for anything else). But with hard drives, especially flash drives, so cheap these days, this is certainly an option to look into if you don’t want to mess with manually copying the data yourself.

Another option is to invest in a Blu-ray drive that lets you back up to Blu-ray discs, which can hold up to 50 gigabytes each on a dual-layer disc. This is a good option if you want a portable back-up that can be stored off-site.

Whichever solution you choose, build in some redundancy. If you back up your data every month to one external drive, for example, also back it up every six months to a different drive, so that when your first back-up fails (and it will), you won’t be completely SOL. Even better, store that second back-up in a separate location from the first, like a safe deposit box or a friend’s house. That way, if your house burns down or is burgled (God forbid!), you’ll still have another back-up to (pardon the pun) fall back on.
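If you'd rather not drag folders around by hand, even a few lines of script can do the monthly copy for you. Here's a minimal sketch in Python; the folder paths in the demo are throwaway temporary directories, so swap in your real data folder and drive paths:

```python
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

def backup(source: Path, destination: Path) -> Path:
    """Copy `source` into `destination` under a time-stamped folder name."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    target = destination / f"backup_{stamp}"
    shutil.copytree(source, target)  # recursive copy; target must not exist
    return target

# Demo with throwaway folders; replace these with your actual paths.
data = Path(tempfile.mkdtemp())
(data / "photo.txt").write_text("irreplaceable family photo")

primary = Path(tempfile.mkdtemp())    # e.g. your monthly USB drive
secondary = Path(tempfile.mkdtemp())  # e.g. the off-site drive

# Redundancy: copy the same data to both drives.
for drive in (primary, secondary):
    copy = backup(data, drive)
    print("backed up to", copy)
```

The time-stamped folder name means each run creates a fresh copy instead of overwriting the last one, which is the same redundancy principle as keeping two drives.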

Ransomware screenshot (image source: The New York Times)

A hard drive crash or natural disaster isn’t the only reason to make sure you always have a recent back-up of your data. WannaCry is a malicious program that swept across the entire planet earlier this year. It’s a particular kind of virus called “ransomware” that invades your computer, encrypts all of your data (making it inaccessible to you), and then displays a screen demanding a ransom payment, typically a few hundred dollars in Bitcoin, or it will delete your data.

A lot of people paid that ransom because they didn’t have a recent back-up of their data.

Don’t wait till it happens to you. Start backing up your data today!

Have a question or idea for a column? Send me an email at or leave a comment on!

TECH TALK: Human or A.I.? The Thin Blurred Line


by Eric Austin
Computer Technical Advisor

The land line rang at 5:29 p.m. Suspicious. I picked up the hand-set. “Hello?”

“Hello! Do you have a few minutes to talk this evening?”

It was a vivacious young lady. But something was off. I couldn’t put my finger on it. “What is this about?” I asked rudely. Vivacious or not, I was a repeat victim of dinner-time telemarketers.

“We’ve just started a new fund-raising campaign for breast cancer research and, um —”

It was the ‘um’ that did it. It didn’t sound natural. It sounded like someone had written ‘um’ into their script in order to trick me into thinking I was speaking to a real person.

“Are you a recording?” I said abruptly, in the middle of her spiel.

The lady’s voice broke off in mid-word. “Yes.” The answer came back immediately.

I hung up the phone with a little chill that traveled up my spine and prickled the hairs on the back of my neck. There is something slightly disturbing about thinking you are speaking to a living, breathing human being only to find out it was a computer instead.

And it got me thinking. Is there any law on the books requiring an Artificial Intelligence to tell you it is an artificial intelligence? After a bit of research, I found an article Time magazine published in 2013 detailing their encounter with a “robot telemarketer” named Samantha West that had refused to admit she wasn’t real.

That was in 2013. There have been major advances since then. Apple just held its Worldwide Developers Conference, where it announced a new voice technology for its computer assistant, Siri. It leverages new advances in machine learning to create a computer-generated voice that is nearly indistinguishable from the real thing. This isn’t a computer awkwardly splicing together pre-recorded words from a real human being. This is a voice generated on the fly by a computer, and it sounds as natural as yours or mine.

In fact, several companies, including Google, are using machine learning and advanced algorithmic programming to develop technologies that allow them to simulate real voices using as little as 60 seconds of data. In other words, feed in 60 seconds of dialog from George Clooney’s latest movie, and you’ll be able to make ole Georgie say anything else you like.

This means that soon you won’t be buying an audiobook read by the actress Meryl Streep. Instead Ms. Streep will simply license her voice and you’ll be listening to a simulation of Meryl Streep reading the book. And you won’t have famous actors doing voice-overs for the latest animated Pixar flick, rather you’ll be watching a movie with characters voiced by a computer simulating famous actors.

But don’t worry, you probably won’t even know the difference! That’s how good the technology has become. The upside is that actors will be able to lend their voices long after they are dead and buried. The downside? I can’t decide whether that’s creepy or cool.

But combine this new voice synthesis technology with recent developments in artificial intelligence and you start to have a combination that sounds ripe for abuse.

Think telemarketers are bad now? What happens when companies no longer need to hire real humans to make the calls?

In fact, it’s likely that you have already had a conversation with an artificial intelligence and not even known it. If you’ve ever gone online to “chat” with technical support, there is a good chance you were speaking with a chatbot and not a real person.

Chatbots are artificial intelligence-driven conversation generators that simulate real human interaction. There are online chatbots designed to be your girlfriend (Julie), your psychiatrist (ELIZA), your doctor (Dr. A.I.), a technical support assistant and many other things. Since 1991, there has even been an annual award, the Loebner Prize, given to the best chatbot.
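The oldest trick in the chatbot book is surprisingly simple: match a pattern in what you typed and echo part of it back as a question. Here's a tiny sketch in that spirit, loosely inspired by the 1960s ELIZA program; real modern chatbots are vastly more sophisticated, and these three rules are my own invented examples:

```python
import re

# A tiny ELIZA-style responder: match a pattern in the user's input and
# turn the captured text back into a question. The rules are illustrative.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(line: str) -> str:
    """Return a canned-but-personalized reply; fall back to a filler."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I feel anxious about computers"))
# -> How long have you felt anxious about computers?
```

It takes only a handful of rules like these before the illusion of a listener starts to form, which is exactly why a well-placed "um" in a script can be so convincing.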

Siri and Google Assistant are both based on research into chatbots. The technology to create an artificial intelligence that can carry on human-like conversations has long been in development, but it’s only recently started to be used in mainstream electronics like Amazon’s Echo or Apple’s just-announced HomePod.

Personally, the idea of speaking to a computer doesn’t bother me. But a computer that uses “um” in an effort to make me think it’s human? That’s disconcerting.

Eric W. Austin is a real, live human being. Or is he? To find out, email him at or leave a comment on this article at


TECH TALK: How to get your news from the internet


by Eric Austin
Technical Advisor

Image Credit: Vanessa Otero, Facebook

Ah, information. The internet has so much of it! In this climate of political chaos, news breaks faster than most of us can keep up. Fortunately, the internet is here to fill our heads with all kinds of wrong information!

While the convenience of the internet is undeniable, information no longer comes with the guaranteed editorial oversight of a print newspaper or magazine. That means more of the responsibility is on us, as consumers, to discern good information from bad. This is particularly true of current news, as it is often reported before all the facts are in.

In this week’s column, I’d like to convey a few tips I rely on to sift through all the information on the internet and figure out what’s really going on!

Know Your Bias. Everyone has a bias, and every source has one as well. It’s inescapable and unavoidable, but as they used to say on Saturday morning cartoons when I was a kid: Knowing is half the battle. Be aware of how your own bias might color your perspective and dictate which sources of information you gravitate to. Purposely expose yourself to the other side — if for no other reason than so you can understand what information other people are using to reach their own conclusions.

There are a number of resources online that examine bias in the media. is a multi-partisan, crowd-sourced website that examines bias in the media and tries to present multiple perspectives of controversial issues., a site sponsored by the Pew Research Center, is another good resource.

Use Multiple Sources. The great thing about the internet is how easy it is to check multiple sources for a broader perspective. Once you’ve identified which way your preferred news outlet leans, take a look at a respected source that leans in the other direction!

But don’t just stop there! Check out some of the English-language news outlets from around the world, like BBC News and the Middle East’s Al Jazeera. It can be enlightening to hear what journalists and pundits outside America have to say about us and the conflicts in which we’re embroiled.

YouTube is a great resource for checking out a variety of sources, as most of the major networks have channels on Google’s video site. Everything is uploaded as three-to-five-minute clips of a particular news item, so it’s easy to add news clips from multiple sources to your “Watch Later” playlist for back-to-back viewing.

Independent, internet-only news stations have also blossomed, especially if you’re interested in what the younger generation is talking about and listening to. The Young Turks and Democracy Now! are two of the most popular and each have channels on YouTube.

Image Credit:, Pew Research Center, “Trust levels of News Sources”

Don’t just get your news from Facebook! According to a recent article in Slate magazine, 44 percent of Americans get their news from the giant social media site. However, it’s easy to miss the source of an article when reading it on Facebook, and knowing the source of a particular bit of information is your greatest asset in determining whether it is valid.

Facebook also tends to emphasize headlines and minimize context. This encourages us to have gut reactions to news rather than contemplating it thoughtfully, and encourages news sources to present the most salacious headline in an effort to capture more clicks.

And your Facebook feed is designed to give you more of what you ‘like.’ Facebook has a vested interest in showing you things in which you’re interested and will cater to your existing views. Every time you ‘like’ a news item someone shares, that preference factors back into Facebook’s algorithms in order to more finely tailor your feed. This subtly warps your views based on the news Facebook assumes you most want to see.
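To see why this feedback loop warps what you see, here's a toy sketch of preference-based feed ranking. To be clear, Facebook's actual algorithm is proprietary and far more complex; this just illustrates the basic idea that every "like" tilts future rankings toward more of the same:

```python
from collections import Counter

# Toy feed ranker: each "like" on a topic boosts how highly future
# stories on that topic rank. Not Facebook's real algorithm.
likes = Counter()  # topic -> number of times you've liked it

def record_like(topic: str) -> None:
    likes[topic] += 1

def rank_feed(stories):
    """Sort stories so your most-liked topics appear first."""
    return sorted(stories, key=lambda s: likes[s["topic"]], reverse=True)

record_like("politics")
record_like("politics")
record_like("sports")

feed = rank_feed([
    {"title": "Local bake sale", "topic": "community"},
    {"title": "Big game tonight", "topic": "sports"},
    {"title": "Election update", "topic": "politics"},
])
print([s["title"] for s in feed])
# -> ['Election update', 'Big game tonight', 'Local bake sale']
```

Notice that the community story sinks to the bottom simply because you never clicked "like" on that topic, no matter how newsworthy it is. Repeat that loop daily and your feed narrows on its own.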

Bookmark some fact-checking sites. If you follow my advice, you might be dismayed to find a lot of conflicting reports, based on where you go for your information. Enter fact-checking sites. is a website maintained by the Tampa Bay Times in which reporters and editors evaluate statements made by politicians, pundits and media outlets. They were awarded the Pulitzer Prize for National Reporting in 2009, and have a great format that allows you to fact check by politician, news channel (ABC, CBS, NBC, CNN, and Fox), or political pundit. But prepare to be shocked when you learn how often your favorite talking head makes inaccurate or outright false statements!

As with everything else, checking multiple sources is your best bet in evaluating truth, so also visit, The Washington Post’s “Fact Checker,” and which looks at the veracity of rumors and urban legends circulating the internet.

Finally, know the difference between news and opinion. Good sources will make it clear which stories are straight news and which are opinion pieces. Know which is which before you start reading!

In this day and age, separating true information from false can be challenging. But if you follow these tips, you’re more likely to have an informed and balanced view of the world around you! Good luck!

Have a comment on something you read? Know of a resource I failed to mention? Let your voice be heard on or email me directly at!