ERIC’S TECH TALK – My life in video games: a trip through gaming history

King’s Quest III: To Heir is Human (1986)

by Eric W. Austin

It was sometime in the mid-1980s when my father took me to a technology expo here in Maine. I think it was held in Lewiston, but it might have been some other place. (Before I got my driver’s license, I didn’t know where anything was.) This was at a time when you couldn’t buy a computer down at the local department store. You had to go to a specialty shop (of which there were few) or order the parts you needed through the mail. Or you could go to a local technology expo like we were doing.

They didn’t have fancy gadgets or shiny screens on display like you might see today. No, this was the age of hobbyists, who built their own computers at home. It was very much a DIY computer culture. As we walked through the expo, we passed booths selling hard drives and circuit boards. For a twelve-year-old kid, it wasn’t very exciting stuff. But then we passed a booth with a pile of videogames and my interest was immediately piqued.

My father didn’t have much respect for computer games. Computers were for work in his view. Spreadsheets and taxes. Databases and word processing. But I was there for the games.

I dug through the bin of budget games and pulled out the box for a game called King’s Quest III: To Heir is Human. The game was released in 1986, so the expo must have taken place a year or two after that. The King’s Quest games were a popular series of adventure games released by the now-defunct developer Sierra On-Line.

Somehow I convinced my father to buy it for me, but when I got home I found to my disappointment that it was the PC DOS version of the game and would not play on my Apple II computer. I never did get a chance to play To Heir is Human (still one of the cleverest titles for a game ever!), but I never lost my fascination with the digital interactive experience of videogames.

The first videogames did not even involve video graphics. They were text adventure games. I remember playing The Hitchhiker’s Guide to the Galaxy (first released in 1984), a text adventure based on the Douglas Adams book series of the same name, on a PC in the computer lab at Winslow High School. These games did not have any graphics; everything was conveyed to the player by words on the screen. You would type simple commands like “look north” and the game would tell you there was a road leading away from you in that direction. Then you would type “go north” and it would describe a new scene. These games were like choose-your-own-adventure novels, but with infinitely more possibilities and endless fun. Who knew a bath towel could save your house from destruction, or that you could translate alien languages by sticking a fish in your ear?
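The heart of those games was a loop that read a typed command and responded with text. A toy sketch in Python captures the idea (the rooms and commands here are my own invented examples, not taken from any actual game):

```python
# Toy text-adventure loop in the spirit of those early games:
# a couple of rooms, and simple verb-noun commands like "go north".
rooms = {
    "field": {"desc": "You stand in an open field.", "north": "road"},
    "road": {"desc": "A dusty road stretches onward.", "south": "field"},
}

def play(commands, start="field"):
    here = start
    output = []
    for cmd in commands:
        verb, _, direction = cmd.partition(" ")
        if verb == "look":
            output.append(rooms[here]["desc"])
        elif verb == "go" and direction in rooms[here]:
            here = rooms[here][direction]      # move to the connected room
            output.append(rooms[here]["desc"])
        else:
            output.append("You can't go that way.")
    return output

for line in play(["look", "go north", "go west"]):
    print(line)
```

Real parsers like Infocom’s were far more sophisticated, of course, but the principle was the same: the whole world lived in text.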

Wizardry VI: Bane of the Cosmic Forge (1990)

One of my first indelible gaming experiences was playing Wizardry VI: Bane of the Cosmic Forge (released 1990) in my father’s office on a Mac Lisa computer with a six-inch black and white screen. These sorts of games were commonly called “dungeon crawlers” because they tended to feature the player exploring an underground, enclosed space, searching for treasure and killing monsters. As was common for the genre at the time, the game used a tile-based movement system: press the forward key once, and your character moved forward one space on a grid. The environments in these games were typically labyrinthine, and part of the fun was getting lost. There was no in-game map system, so it was common for players to keep a stack of graph paper and a pencil next to their keyboard. With each step, you would draw a line on the graph paper, mapping out your progress manually for later reference. Some games came with a map of the game world in the box. I remember that Ultima V: Warriors of Destiny (1988) came with a beautiful cloth map, which I thought was the coolest thing ever included with a game.
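Tile-based movement is about as simple as a game mechanic gets, which is part of why the graph-paper mapping worked so well. A minimal sketch (my own illustration, not code from any actual dungeon crawler):

```python
# Minimal sketch of tile-based movement: the world is a grid,
# and each keypress moves the player exactly one tile.
moves = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def step(pos, direction):
    """Return the new grid position after one move."""
    dx, dy = moves[direction]
    return (pos[0] + dx, pos[1] + dy)

position = (0, 0)
position = step(position, "north")
position = step(position, "east")
print(position)  # one square north and one square east of the start
```

Every step lands on a whole-numbered grid square, which is exactly why a sheet of graph paper could serve as a perfect map.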

As I grew up, so did the videogame industry. The graphics improved. The games became more complex. As their audiences matured, games flirted with issues of violence and sexuality. Games like Leisure Suit Larry (1987) pushed into adult territory with raunchy humor and sexual situations, while Wolfenstein 3D (1992) had you gunning down Nazis in an underground bunker in 1940s Germany, depicting violence like never before. These games stirred up plenty of controversy in their day among critics who saw them as omens of coming societal collapse.

Wolfenstein 3D (1992)

In 1996, I bought my first videogame console, the original PlayStation. At the time, Sony was taking a giant gamble, releasing a new console to compete with industry juggernauts like Nintendo and SEGA. The first PlayStation console was the result of a failed joint effort between Sony and Nintendo to develop a CD-ROM peripheral for the Super Nintendo Entertainment System (SNES), a console released in North America in 1991. When that deal fell through, Sony decided to develop its own videogame system, which eventually became the PlayStation.

There was a lot of debate during these years about the best medium for delivering content — solid-state cartridges, which were used for the SNES and the later Nintendo 64 (released 1996), or the new optical CD-ROMs used by Sony’s PlayStation. The cartridges used by Nintendo (and nearly every console released before 1995) offered faster data transfer speeds than optical CDs but had a smaller potential data capacity. Optical media won that debate, as the N64 was the last major console to use a cartridge-based storage format for its games. Funnily enough, this debate has come full circle in recent years with the resurgence of cartridge-style storage like flash drives and solid-state drives. The storage limitations of the past have mostly been solved, and solid-state memory still offers faster data transfer rates than optical options like CD-ROMs, DVDs or Blu-ray.

Star Wars arcade game (1983)

The PlayStation was also built from the ground up to process the polygon-based graphics technology that was becoming popular in computer games, instead of the old sprite-based graphics of the past. This was a shift away from the flat, two-dimensional visuals that had been the standard up to that point, and it was an evolution that had taken place over a number of years. First, there was something called vector graphics, which were basically just line drawings in three-dimensional space. I remember playing a Star Wars arcade game (released 1983) with simple black and white vector graphics down at the arcade that used to be located next to The Landing, in China Village, when I was a kid. The game simulated the assault on the Death Star from the original 1977 movie and featured unique flight-stick controls that were very cool to a young kid who was a fan of the films.

Videogame consoles have changed a lot over the years. My cousin owned an SNES and used to bring it up to my house in the summers to play Contra and Super Mario World. Back then, the big names in the industry were Nintendo, SEGA and Atari. Nintendo is the only company from those days that is still in the console market.

Up until the late 1990s, each console was defined by its own unique library of games, with much of the development happening in-house by the console manufacturers. This has changed over the years so that nearly everything today is made by third-party developers and released on multiple platforms. In the early 2000s, when this trend was really taking off, many people theorized it would spell doom for the videogame console market because it was removing each console’s uniqueness, but that has not turned out to be the case.

Videogames are usually categorized into genres much like books or movies, but the most popular genres have changed drastically over the years. Adventure games, usually focused on puzzles and story, ruled the day in the early 1980s. That gave way to role-playing games (RPGs) through the mid-’90s, which were basically adventure games with increasingly complex character progression systems. With the release of Wolfenstein 3D from id Software in 1992, the world was introduced to the first-person shooter (FPS) genre, which is still one of the most popular game types today.

The first-person perspective, as it’s called in videogame parlance, had previously been used in dungeon crawlers like the Wizardry series (mentioned above) and Eye of the Beholder (1991), but Wolfenstein coupled this perspective with a type of action gameplay that proved immediately popular and enduring. Another influential game I played was Marathon, an alien shooter released in 1994 for the Apple Macintosh and developed by Bungie, a studio that would later go on to create the incredibly popular Halo series for Microsoft’s Xbox console.

One of the things I have always loved about videogames is the way the industry never sits still. It’s always pushing the boundaries of the interactive experience. Games are constantly being driven forward by improving technology and innovative developers who are searching for new ways to engage players. It is one of the most dynamic entertainment industries operating today. With virtual reality technology advancing quickly and promising immersive experiences like never before, and creative developers committed to exploring the possibilities of emergent gameplay afforded by more powerful hardware, I’m excited to see where the industry heads in the coming decades. If the last thirty-five years are any indication, it should be awesome!

Eric W. Austin writes about technology and community issues. Contact him by email.

ERIC’S TECH TALK: CBC wants to revolutionize internet access in China, but will it work?

by Eric W. Austin

The views of the author in the following column are not necessarily those of The Town Line newspaper, its staff and board of directors.

On the ballot this November is a question that has the potential to revolutionize internet access for residents of China. The question is also long, at over 200 words, a bit confusing and filled with legalese. As a resident of China, a technophile, and a reporter for The Town Line newspaper, I wanted to understand this initiative, figure out exactly what it’s attempting to accomplish, and try to find out what residents of China think about the future of local internet access.

In order to understand the issue, I attended two of the recent information sessions held by the China Broadband Committee and also sat down with Tod Detre, a member of the committee, whom I peppered with questions to clear up any confusion I had.

I also created a post in the Friends of China Facebook group, which has a membership of more than 4,000 people from the town of China and neighboring communities, asking for comments and concerns from residents about the effort. Along with soliciting comments, I included in my post a survey question asking whether residents support the creation of a fiber optic infrastructure for internet access in China. (I should be clear here and point out that the question on the November ballot does not ask whether we should build a fiber optic network in China, only whether the selectboard should move forward with applying for financing to fund the initiative if they find there is sufficient interest to make the project viable. But for my purposes, I wanted to understand people’s thoughts on the goals of the effort and how they felt about their current internet access.)

My Facebook post garnered 86 comments and 141 votes on the survey question. One hundred and twenty people voted in favor of building a fiber optic network in China and 21 people opposed it. (This, of course, was not a scientifically rigorous survey, and the results are obviously skewed toward those who already have some kind of internet access and regularly utilize online platforms like Facebook.)

Before we get into the reasons why people are for or against the idea, let’s first take a look at what exactly the question on the ballot is and some background on what has led up to this moment.

The question before voters in November does not authorize the creation of a fiber optic network in China. It only authorizes the selectboard to begin the process of pursuing the financing that would be required to accomplish that goal – but only if certain conditions are met. So, what are those conditions? The most important condition is one of participation. Since the Broadband Committee’s goal is to pay for the fiber optic network solely through subscriber fees – without raising local taxes – the number of people who sign up for the new service will be the primary determining factor on whether the project moves forward.

If the question is approved by voters, the town will proceed with applying for financing for the initiative, which has an estimated total cost of about $6.5 million, to be paid for by a bond in the amount of $5.6 million, with the remainder covered through a combination of “grants, donations and other sources.” As the financing piece of the project proceeds, Axiom, the company the town plans to partner with to provide the internet service, will begin taking pre-registrations for the program. Although the length of this pre-registration period has not been completely nailed down, it would likely last anywhere from six months to a year while the town applies for financing. During this period, residents would have an opportunity to reserve a spot and indicate their interest in the new service with a refundable deposit of $100, which would then be applied toward their first few months of service once the program goes live. Because the plan for the initiative is for it to be paid for by subscriber fees rather than any new taxes, it is essential that the project demonstrates sufficient interest from residents before any work is done or financing acquired.

With approximately 2,300 structures, or households, that could potentially be connected to the service in China, the Broadband Committee estimates that at least 834 participants – or about 36 percent – would need to enroll in the program for it to pay for itself. Any number above this would create surplus revenue for the town, which could be used to pay off the bond sooner, lower taxes, reduce subscriber fees or for other purposes designated by the selectboard. If this number is not reached during the pre-registration period, the project would not proceed.
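The committee’s break-even arithmetic is straightforward, and can be sketched in a few lines of Python (the figures are the ones cited above; the bond payment details behind the 834-subscriber threshold are the committee’s, not something the column spells out):

```python
# Break-even participation math cited by the Broadband Committee.
total_structures = 2300   # potential connections in the town of China
breakeven_subs = 834      # subscribers needed for the project to pay for itself

participation = breakeven_subs / total_structures
print(f"Required participation: {participation:.0%}")  # about 36 percent
```

Any sign-ups beyond that threshold translate directly into the surplus revenue the committee describes.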

One of the problems this initiative is meant to alleviate is the cost of installing internet for residents who may not have sufficient internet access currently because bringing high speed cable to their house is cost prohibitive. The Broadband Committee, based on surveys they have conducted over the last several years, estimates that about 70 percent of residents currently have cable internet. The remaining 30 percent have lower speed DSL service or no service at all.

For this reason, for those who place a deposit during the initial signup period, there would be no installation cost to the resident, no matter where they live, including those who have found such installation too expensive in the past. (The lone exception to this guarantee would be residents who do not have local utility poles providing service to their homes. In those rare instances, the fiber optic cable would need to be buried underground and may incur an additional expense.) After the initial pre-registration period ends, this promise of free installation would no longer be guaranteed, although Axiom and the Broadband Committee have talked about holding rolling enrollment periods in the future which could help reduce the installation costs for new enrollees after the initial pre-registration period is over.

What are the benefits of the proposed fiber optic infrastructure over the cable broadband or DSL service that most residents have currently? Speed and reliability are the most obvious benefits. Unlike the copper cable used currently for cable internet, which transmits data via electrical pulses, fiber optic cable transmits data as pulses of light through fine glass fibers and does not run into the same limitations as its copper counterpart. The speed at which data can be transmitted via fiber optic cable is primarily limited by the hardware at either end of the connection rather than the cable itself. Currently, internet service travels out from the servers of your internet provider as a digital signal via fiber optic cable, but then is converted to an analog signal as it is passed on to legacy parts of the network that do not have fiber optics installed. This conversion process slows down the signal by the time it arrives at your house. As service providers expand their fiber optic networks and replace more of the legacy copper wire with fiber optics, the speed we experience as consumers will increase, but it is still limited by the slowest point along the network.

The proposed fiber optic network would eliminate this bottleneck by installing fiber optic cable from each house in China back to an originating server with no conversion necessary in between.

Both copper and fiber optic cable suffer from something called “attenuation,” which is a degradation of the strength of the signal as it travels further from its source. The copper cables we currently use have a maximum length of 100 meters before they must be fed through a power source to amplify their signal. In contrast, fiber optic cables can run for up to 24 miles before any significant weakening of the signal starts to become a problem. Moving from copper cable to fiber optics would virtually eliminate problems from signal degradation.
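To put those two attenuation figures side by side (using the distances quoted above; real-world limits vary by cable type and hardware):

```python
# Compare the maximum unamplified run lengths quoted in the column.
copper_limit_m = 100             # copper runs need amplification after ~100 meters
fiber_limit_m = 24 * 1609.34     # ~24 miles, converted to meters

ratio = fiber_limit_m / copper_limit_m
print(f"Fiber can run roughly {ratio:.0f} times farther than copper "
      "before signal loss becomes a problem")
```

That difference of a few hundred times is why a single hub station in town can serve homes many miles away with no powered equipment in between.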

Another downside to the present infrastructure is that each of those signal conversion or amplification boxes requires power to do its job. This means that when the power goes out, the internet goes out with it, because the boxes along the route can no longer push the signal along. The infrastructure proposed by the China Broadband Committee would solve this problem by installing fiber optics along the entire signal route leading back to a central hub station, which would be located in the town of China and powered by a propane generator that automatically kicks on when the power goes out. With the proposed system, as long as you have a generator at your house, your internet should continue to work – even during a localized power outage.

There’s an additional benefit to the proposed fiber optic network that residents would notice immediately. The current cable internet that most of us use is a shared service: when more people are using the service, everyone’s speed decreases. Most of us know that the internet is slower at 5 o’clock in the afternoon than it is at 3 in the morning. The proposed fiber optic network is different, however. Inside the fiber optic cable are hundreds of individual glass strands that lead back to the network source. A separate internet signal can ride on each of these strands without interfering with the others. Hawkeye Connections, the proposed contractor for the physical infrastructure part of the project, would install cable with enough individual strands so that every house along its path could be connected via a different strand within the cable. This means that no one would be sharing a signal with anyone else, and internet slowdown and speed fluctuations during peak usage should become a thing of the past.

Another change proposed by the CBC initiative would be to equalize upload and download speeds. Presently, download speeds are generally higher than upload speeds, an industry convention that is a legacy of the cable TV networks from which these services evolved. Cable TV is primarily a one-way street, data-wise: the video information is sent from the cable provider to your home and displayed on your TV, and very little data is sent the other way, from your home back to the cable provider. This was true of most data streams in the early days of the internet as well. We downloaded pictures, videos and webpages. Nearly all the data was traveling in one direction. But this is changing. We now have Zoom meetings, smart houses and interactive TVs. We upload more information than we used to, which means upload speed is more important than ever. This trend is likely to continue in the years ahead as more of our lives become connected to the internet. The internet service proposed by the Broadband Committee and Axiom, the company contracted to provide the service, would equalize upload and download speeds. For example, the first tier of the service would offer speeds of 50 megabits up and 50 megabits down. This, combined with the other benefits outlined above, should make Zoom meetings much more bearable.

What about costs for the consumer? The first level service tier would offer speeds of 50 megabits download and 50 megabits upload for $54.99 a month. Higher level tiers would include 100/100 for $64.99/month, 500/500 for $149.99/month, and a gigabit line for businesses at a cost of $199.99/month.
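One way to compare those tiers is price per megabit of download speed. This is my own back-of-the-envelope yardstick, not a figure from the committee:

```python
# Price per megabit of download speed for each proposed service tier.
tiers = {50: 54.99, 100: 64.99, 500: 149.99, 1000: 199.99}  # Mbps -> $/month

for mbps, price in tiers.items():
    print(f"{mbps} Mbps: ${price / mbps:.2f} per megabit per month")
```

By this measure, each step up the ladder delivers bandwidth at a steeper discount, from about $1.10 per megabit on the entry tier down to roughly 20 cents on the gigabit business line.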

Now that we’ve looked at some of the advantages and benefits of the fiber optic infrastructure proposed by the China Broadband Committee, what about the objections? A number of residents voiced their opposition to the project on my Facebook post, so let’s take a look at some of those objections.

One of the most common reasons people are against the project is because they think there are other technologies that will make the proposed fiber optic network obsolete or redundant in the near future. The technologies most often referenced are 5G wireless and Starlink, a global internet initiative being built by tech billionaire and Tesla/SpaceX CEO Elon Musk.

While new 5G cellular networks are currently being rolled out nationwide, it’s not clear when the technology will be widely available here in China. And even when such capability does become available to most residents, it will likely suffer from similar problems that our existing cell coverage suffers from now – uncertain coverage on the outskirts of town and in certain areas. (I still can’t get decent cell reception at my home just off Lakeview Drive, in China Village.) Further, while 5G is able to provide impressive download speeds and low latency, it requires line of sight with the broadcasting tower and can easily be blocked by anything in between like trees or buildings. Residents of China who currently suffer from poor internet service or cell phone reception today would likely suffer from the same problems with 5G coverage as well. Fiber optic cable installation to those residents would solve that problem, at least in terms of internet access, once and for all.

Starlink is a technology that aims to deliver internet access to the world through thousands of satellites in low-earth orbit, but it is still years away from reaching fruition and there is no guarantee it will deliver on its potential. When I spoke with the Broadband Committee’s Tod Detre, he said he applied to be part of the Starlink beta program more than six months ago, and has only recently been accepted (although he’s still awaiting the hardware required to connect). There is also some resistance to the Starlink project, primarily from astronomers and other star gazers, who worry how launching so many satellites into orbit will affect our view of the night sky. As of June, Starlink has launched approximately 1,700 satellites into orbit and currently services about 10,000 customers. The initiative is estimated to cost at least $10 billion before completion. At the moment, the company claims to offer speeds between 50 and 150 megabits and hopes to increase that to 300 megabits by the end of 2021, according to recent reporting. To compare, copper-based networks can support data transfer speeds of up to 40 gigabits, while fiber optic lines have a far higher practical ceiling. Of course, these upper speeds are always limited by the capabilities of the hardware at either end of the connection.

While both 5G and technologies like Elon Musk’s Starlink hold a lot of potential for consumers, 5G service is likely to suffer from the same problems residents are already experiencing with current technology, and Starlink is still a big unknown and fairly expensive at $99/month plus an initial cost of $500 for the satellite dish needed to receive the signal. It’s also fairly slow, even at the promised future speed of 300 megabits. As the Broadband Committee’s chairman, Bob O’Connor, pointed out at a recent public hearing on the proposed network, bandwidth needs have been doubling every ten years and are likely to continue increasing in a similar fashion for the near future.
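That doubling estimate compounds quickly. A quick projection in Python makes the point (the 100-megabit starting figure is my own illustrative number, not the committee’s):

```python
# Project bandwidth needs under the "doubling every ten years" estimate.
start_mbps = 100  # illustrative starting need, not a committee figure

for decade in range(4):
    need = start_mbps * 2 ** decade  # doubles once per decade
    print(f"In {decade * 10} years: {need} Mbps")
```

Under that assumption, a connection that feels generous today would need to be eight times faster within thirty years, which is the committee’s argument for building capacity well beyond current demand.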

Another objection frequently voiced by residents is that the town government should not be in the business of providing internet service to residents. O’Connor also addressed this concern in a recent public hearing before the China selectboard. He said that residents should think about the proposed fiber optic infrastructure in the same way they view roads and streets. (This is a particularly apt comparison since the internet is often referred to as the “information superhighway.”) O’Connor says that although the town owns the roads, it may outsource the maintenance of those roads to a subcontractor, in the same way that the town would own this fiber optic infrastructure, but will be subcontracting the service and maintenance of that network to Axiom.

The Broadband Committee also points out that there are some benefits that come with the town’s ownership of the fiber optic cable and hardware: if residents don’t like the service they are receiving from one provider they can negotiate to receive service from another instead. The committee has said that although Axiom would initially be contracted for 12 years, there would be a service review every three years to see if we are happy with their service. If not, we could negotiate with another provider to service the town instead. This gives the town significant leverage to find the best service available, leverage that we would not have if the infrastructure was owned by a service provider like Spectrum or Consolidated Communications (both of whom have shown little interest in the near term for upgrading the China area with fiber optic cable).

There are certainly risks and outstanding questions associated with the committee’s proposal. Will there be enough subscribers for the project to pay for itself? Could another technology come along that would make the proposed infrastructure obsolete or less attractive in the future? Will proposed contractors like Axiom and Hawkeye Connections (who will be doing the installation of the physical infrastructure) provide quality and reliable service to residents long-term? Can we expect the same level of maintenance coverage to fix storm damage and outages that we experience now?

On the other hand, the potential benefits of the project are compelling. The internet, love it or hate it, has become an essential part of everyday life and looks only to become more essential in the years ahead. Having a reliable and high speed infrastructure for residential internet access is likely to play an important role in helping to grow China’s economy and to attract young families who are looking for a place to live and work.

Ultimately, voters will decide if the potential benefits outweigh the possible risks and pitfalls come this November.

Contact the author by email.

More information is also available on the CBC website.

Read all of The Town Line’s coverage of the China Broadband Committee here.

ERIC’S TECH TALK: A primer for finding good information on the internet

by Eric W. Austin

The world is filled with too much information. We are inundated with information during nearly every moment of every day. This is a problem because much of it is simply spin and misinformation, and it can be difficult to separate the quality information from the background noise that permeates the internet.

I think being successful in this endeavor comes down to two things: learning to discern the quality sources from the sketchy ones, and getting in the habit of viewing a variety of sources before leaping to conclusions.

Let’s deal with the first one: quality sources. How do you determine the good sources from the bad?

To visualize the problem we’re dealing with, imagine a perfect source as a dot in the middle of a blank page. This hypothetical source is unbiased and completely reliable. (There is, of course, no such source or I would simply recommend it to you and this would be a very short article.)

Now imagine each and every source on the internet as another dot on this page. The distance each source is from the center dot is an indication of greater bias and lower reliability.

Oh, but you might complain, this is such a highly subjective exercise! And you would be absolutely right. Judging the quality of information on the web is not a hard science; it is a skill you need to develop over time, but it is also a skill which has become more and more essential to life in the modern age.

As a part of this mental exercise it’s important to be aware of the subjective weaknesses inherent in the human condition that are likely to trip you up. For example, we are much more likely to judge sources which align with our existing views as less biased than those sources which do not. So, you need to compensate for that when drawing the mental picture that I described above.

When I was learning to drive, our driver’s education teacher emphasized the importance of looking at both side mirrors, the rearview mirror and glancing over my shoulder before making any move in traffic such as changing lanes. Why wasn’t it sufficient to rely on only a single method to judge the safety of an action before taking it? Because each method has a blind spot which can only be compensated for by employing more than one tactic prior to making a decision. Using overlapping sources of information decreases the chances of missing something important.

Judging information on the internet is kind of like that: no one method is going to be sufficient and each will have a particular blind spot which can only be counterbalanced by employing multiple solutions.

Certain online resources can help you draw a more accurate picture of the sources on which you rely. One such website assesses more than 3,600 websites and news sources for bias and credibility across the internet, on both the right and the left. AllSides is another resource which rates the political bias of websites and often places news stories from the left and right side by side so you can see how specific information is being presented. AllSides also has a handy chart rating the bias of the most well-known news sources from across the political spectrum. I don’t always perfectly agree with the ratings these sites supply (and neither will you), but they are a good place to start and should be another tool in your information-analysis utility belt.

If you are confronted with a source you do not have any prior experience with, search for it using the above resources and also do a web search for the name of the website. There may be a Wikipedia page about it that will tell you where the site’s funding comes from and whether the site has been caught peddling false information in the past. A web search may also dig up stories by other news sources reporting on false information coming from that website. There is nothing news sources like better than calling out their rivals for shoddy reporting. Use that to your advantage.

If a web search for the site turns up nothing, that could be a warning signal of its own. On the internet, it is absurdly easy to throw up a website and fill it with canned content, interspersed with propaganda or conspiracy theories to draw internet clicks and advertising dollars. It is becoming increasingly common for politically motivated groups to create credible-looking news sites in order to push a specific ideological agenda, so look for sources with some history of credibility.

So, what about bias? Isn’t everything biased? Well, yes, which is why our unbiased and perfectly reliable source above is only hypothetical. The skill you must develop is in determining how far each source is from matching that hypothetical ideal, and then building a well-rounded collection of credible sources representing various points of view.

One thing that must be mentioned is that bias and credibility are not mutually exclusive. Although sources that are highly biased are also more likely to lack credibility, the correlation is not strict. In determining the credibility of a source, bias is only one of the factors to consider.

Let’s take a look at two news sources on opposite sides of the political spectrum: Fox News and CNN.

Initially, you might be tempted to think these sources are the worst examples to use in a discussion of reliable sources because of their high level of bias, but I would like to argue the opposite. First, it is important to recognize the difference between news and opinion. Most large news organizations separate their news reporters from their opinion commentators. If a website does not make this distinction apparent to the consumer, it may not be a source you want to trust. Separating news from editorial content is standard policy because bias is a well-known problem for most news organizations, and the separation is a safeguard against too much opinion bleeding into their news. Of course, this is not a perfect solution, but such a precaution is better than nothing, and smaller niche sites often do not have the resources or desire to make this distinction.

This does not mean that smaller niche sites cannot be valuable sources of information, especially if that information is of a sort in which the site specializes, but it is something to consider when evaluating the validity of information, especially about controversial topics.

Another reason to include several high-profile news sites from both sides of the aisle in your list of sources is that any missteps by these organizations are less likely to escape notice than those of smaller niche news sites. You can bet CNN will be quick to pounce on any sort of shoddy reporting put out by Fox News, and vice versa.

So, bias is not necessarily a bad thing. It is important that we have right-leaning news organizations to rigorously investigate left-leaning administrations, just as it’s important to have left-leaning news organizations to report on right-leaning administrations. That is the beautiful mess that is the American free press. Your best bulwark against bias is to have a diversity of credible sources at your disposal representing a wide range of viewpoints.

Remember that the best safeguard against our own biases is to seek out opposing opinions in order to constantly challenge our preconceptions and force ourselves to regularly reevaluate our conclusions. Nobody is right all the time, and most of us are wrong more often than we’d like to admit. Cognitive dissonance – that sense of discomfort we feel when encountering information which threatens to upend our carefully set up boundaries and views of the world – is not something to run from but to embrace. Finding out you are wrong is often the only way to discover what is right.

Eric W. Austin writes about local issues and technology. He can be reached at

ERIC’S TECH TALK: The 5G future and the fight with China

by Eric W. Austin

There’s a new wireless technology being rolled out this year that promises to be the biggest technological revolution since the invention of the cell phone. Dubbed 5G NR (“fifth generation new radio”), this isn’t just an upgrade to the existing 4G cellular network, but a radical reinvention of wireless technology. It will require an enormous investment in new infrastructure, but it also promises massive improvements in bandwidth, speed and latency.

This new cellular technology achieves these incredible improvements by making fundamental changes to the way cellular networks function. Whereas the old 4G technology used radio waves in the microwave band between 700 MHz and 3 GHz to communicate, 5G will tap into previously unused radio frequencies in ranges from 30 GHz to 300 GHz, known as millimeter bands. In addition, the new 5G technology will transmit across wider frequency channels of up to 400 MHz, compared with 4G’s limit of only 20 MHz.

Now, that may sound like a lot of technobabble, but it has real world implications, so let me explain.

A radio wave can be imagined as a wavy line traveling through space at the speed of light. Information is transmitted by manipulating the crests and valleys that make up that wavy line, much like the dots and dashes in Morse Code. The number of crests and valleys in a radio wave that pass a point in space in a specific amount of time determines the quantity of information transmitted. This is called the frequency of a radio wave. Since you can’t increase the speed at which a radio wave travels (it will always travel at the speed of light), the only way to increase information transfer is to increase the number of crests and valleys within a single radio wave. This is done by increasing its frequency. You can think of this as the difference between a wavy piece of string and a tightly coiled spring. While both the string and the spring are made from material of the same length, the spring will contain a greater number of crests and valleys and take up considerably less space. This is the basic concept behind the move in 5G to transmit using higher frequency radio waves.
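The relationship described above can be checked with simple arithmetic: wavelength equals the speed of light divided by frequency. This short sketch (the frequencies are the ones mentioned earlier in the article) shows why the new 5G bands are called “millimeter” waves.

```python
# Back-of-the-envelope check: wavelength = speed of light / frequency.
C = 299_792_458  # speed of light, meters per second

def wavelength_m(freq_hz):
    """Return the wavelength in meters for a frequency in hertz."""
    return C / freq_hz

# 4G microwave band (700 MHz) vs. low end of the 5G millimeter band (30 GHz)
lte = wavelength_m(700e6)    # roughly 0.43 m
mmwave = wavelength_m(30e9)  # roughly 0.01 m, i.e. about 10 mm

print(f"700 MHz wave: {lte * 100:.1f} cm")
print(f"30 GHz wave: {mmwave * 1000:.1f} mm")
```

A 30 GHz wave is about ten millimeters crest to crest, and at 300 GHz the length shrinks to a single millimeter, which is exactly where the “millimeter band” name comes from.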

Since the higher frequency radio waves of 5G technology are capable of transmitting a much greater amount of data than earlier microwave-based 4G technology, one can reasonably ask, why aren’t we using it already? The answer is simple. These high-frequency waves are much smaller, with their crests and valleys more tightly packed together, and therefore require receivers which are much more sensitive and difficult to manufacture. While such receivers have been available for military applications for a number of years, it has taken time for it to become cost effective to produce such receivers for wider commercial use. That time has now come.

The ability to fit more information into smaller transmissions, in addition to the use of wider frequency channels, means a hundredfold increase in data transfer rates, and lower power consumption for devices.

However, there are also some significant downsides to using these higher frequencies. While millimeter waves can pack more information into a single broadcast, their shorter wavelength means they can also be easily blocked by obstacles in the environment and absorbed by atmospheric gases. Although the antennas needed to receive these transmissions will be much smaller than the giant cell towers in use today, we will need more of them because 5G antennas require line-of-sight in order to receive transmissions. Instead of cell towers every few miles, as we have for our current 4G/3G cellular network, hundreds of thousands of smaller antennas will have to be installed on office buildings, telephone poles and traffic lights.

This new 5G technology couldn’t have been implemented earlier because it relies on the existing fourth-generation infrastructure already in place to make up for these deficiencies.

While the new 5G technology has some real benefits to human user experience, like having enough bandwidth to stream 50 4K movies simultaneously, speeds that are 20 times greater than the average U.S. broadband connection, and the ability to download a high definition movie in less than a second, the real excitement lies in how this upgrade will benefit the machines in our lives.

A confluence of technologies ripening in the next few years is set to revolutionize our lives in a way that promises to be greater than the sum of the individual parts: this new, high-speed 5G cellular upgrade; artificial intelligence; and the rapidly widening world of the Internet of Things (IoT). These three technologies, each with astonishing potential on their own, will combine to change our lives in ways that we can only begin to imagine.

I have spent this article talking about 5G, and you have likely heard a bit about the emerging field of artificial intelligence, but the final item on this list, the Internet of Things, bears a bit of explaining. The Internet of Things is an industry buzzword referencing the increasing level of sophistication built into everyday appliances. Your car now routinely has cameras, GPS locators, accelerometers and other sensors installed in it. Soon nearly every electrical device in your house will be similarly equipped. In the future, when you run out of milk, your refrigerator will add milk to a list of needed items stored in the cloud. On your way to the grocery store, your home A.I. will send a message ahead of you and robots at the store will prepare a shopping cart with the requested items, which will be waiting for you when you arrive. Stepping through your front door after a long day at work, your phone will ping you with a list of recipes you can prepare for dinner based on the items you’ve recently purchased.

This is the Internet of Things. It’s every device in your life quietly communicating behind the scenes in order to make your life easier. Although this idea might seem a bit creepy at first, it’s coming whether you like it or not. According to one statistics website, there are currently more than 26 billion devices worldwide communicating in this way. By 2025, that number is expected to top 75 billion.
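To put those projections in perspective, the implied growth rate can be worked out directly. Assuming the 26-billion figure describes the year of writing (roughly 2019 — an assumption on my part), going from 26 billion to 75 billion devices by 2025 works out to roughly 19 percent compound growth every year:

```python
# Implied compound annual growth rate (CAGR) from the projection above,
# assuming a 2019 baseline of 26 billion connected devices.
start_devices = 26e9
end_devices = 75e9
years = 2025 - 2019

cagr = (end_devices / start_devices) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")
```

In other words, the device population nearly triples in six years, which is why the bandwidth demands discussed below rise so steeply.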

The upgrade to 5G, with its increases in speed and bandwidth, is not so much a benefit to us humans as it is an aid to the machines in our life. As more and more devices come on line and begin to communicate with each other, the demand for greater speed and bandwidth will increase exponentially. Soon the devices in your house will be using more bandwidth than you are.

There are also some significant security concerns arising from the need to build additional infrastructure to support the new 5G network. It will require the installation of billions of antennas and 5G modems across the world, in every town, city and government building. But who will build them? According to a February 2019 article in Wired magazine, “as of 2015, China was the leading producer of 23 of the 41 elements the British Geological Survey believes are needed to ‘maintain our economy and lifestyle’ and had a lock on supplies of nine of the 10 elements judged to be at the highest risk of unavailability.” With this monopoly on the materials needed for high tech production, Chinese companies like Huawei, which is already the largest telecommunications manufacturer in the world, are set to corner the market on 5G equipment.

You may have heard of Huawei from the news recently, as the U.S. government has accused the company of everything from violating international sanctions to installing backdoors in the hardware they manufacture on behalf of the Chinese government. China’s second largest telecommunications company, ZTE, which is also looking to seize a piece of the emerging 5G pie, has been the subject of similar accusations, and last year paid more than $1.4 billion in fines for violating U.S. sanctions against Iran and North Korea.

Do we really want to build our communications infrastructure with equipment made by companies with close ties to the Chinese government? It’s a real concern for security experts in the U.S. and other western countries. Fortunately, European companies like Nokia and Ericsson, South Korea’s Samsung and California’s Cisco Systems are emerging as threats to this Chinese monopoly.

The new technology of 5G is set to revolutionize cellular communications in the next few years, but the real story is how the confluence of technologies like artificial intelligence and the Internet of Things, in combination with this upgrade in communications, will change our lives in ways we can’t possibly foresee. The 5G future will be glorious, exciting, and fraught with danger. Are you ready for it?

ERIC’S TECH TALK: Where are all the aliens?

by Eric W. Austin

(The views of the author in the following column are not necessarily those of The Town Line newspaper, or its staff and board of directors.)

Where is everybody? That’s the question posed by Italian physicist and Nobel Prize winner Enrico Fermi in 1950 over a casual lunch with his fellow physicists at the famous Los Alamos National Laboratory, in New Mexico.

To understand Fermi’s question and why he asked it, we must first review a bit of background on Earth’s own rocky road to life.

The earth formed, scientists tell us, about 4.5 billion years ago, 9 billion years after the Big Bang. From a cosmological standpoint, the earth is a bit of a late-bloomer.

After forming, Earth was a hot ball of glowing, molten rock – much too hot for life – for nearly half a billion years, but eventually the surface cooled enough for the first oceans to form. Now the stage was set for life, but once conditions were favorable, how long did it take for life to develop on the new planet?

The answer, surprisingly, is not very long. According to some estimates, it may have been as little as a hundred million years after the earth cooled. From the perspective of the Universe, that is hardly any time at all, just a geological blink of the cosmic eye.

Assuming this is true of life across the universe and not simply a cosmic fluke when it comes to Earth, we would expect star systems which formed much earlier than our own to have developed life billions of years before ours did.

Add to this the understanding that it has taken our species only a few million years to go from tree-dwelling primates to radio-broadcasting prima donnas, and it suggests that any civilization with as little as a million-year head start on us would have spread across half the galaxy before we had even crawled out of the trees.

So, where are all the aliens?

This question has perplexed scientists for more than half a century and is known as the Fermi Paradox.

“Fermi realized that any civilization with a modest amount of rocket technology and an immodest amount of imperial incentive could rapidly colonize the entire Galaxy,” says Seth Shostak, a senior astronomer at the SETI Institute (the Search for Extraterrestrial Intelligence). “Within ten million years, every star system could be brought under the wing of empire.”

He continues, “Ten million years may sound long, but in fact it’s quite short compared with the age of the Galaxy, which is roughly ten thousand million years. Colonization of the Milky Way should be a quick exercise.”
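Shostak’s ten-million-year figure survives a back-of-the-envelope check. The Milky Way is roughly 100,000 light-years across, so even a colonization wave creeping outward at just one percent of light speed (a modest assumption for “a modest amount of rocket technology”) crosses the entire galaxy in about ten million years:

```python
# Sanity check on Shostak's estimate: how long does a colonization
# wave take to cross the galaxy at a given fraction of light speed?
galaxy_diameter_ly = 100_000   # Milky Way diameter, roughly
speed_fraction_c = 0.01        # 1% of light speed, a modest assumption

# A light-year per year is the speed of light, so the arithmetic is direct.
crossing_time_years = galaxy_diameter_ly / speed_fraction_c
print(f"{crossing_time_years / 1e6:.0f} million years")  # 10 million
```

Pauses to build ships and settle worlds would stretch this out, but even generous delays leave the answer tiny compared with the galaxy’s ten-billion-year age.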

This creates a bit of a quandary for those searching for intelligent life beyond our solar system. On one hand, life appeared on Earth very early in its history – almost immediately, once conditions were right – so we would expect life to have appeared elsewhere in the universe as expeditiously as it did here on Earth. Since there are many stars much older than our sun, it only makes sense that life would pop up in many parts of the universe long before it did here on Earth.

On the other hand, it’s hard to get past the fact that we haven’t yet found any signs of life – not a smidge, smudge or random radio signal beamed out from Alpha Centauri. Nothing. Nada. Zilch.

There must be something wrong with this picture.

Maybe, goes the thinking of some scientists, our assumption that life appeared very quickly here on Earth is wrong simply because the underlying assumption that life originated on Earth is wrong.

In other words, maybe life didn’t originate on Earth at all. Maybe it came from somewhere else. This idea is called the Panspermia Theory for the origin of life. The theory posits that life originated elsewhere in the universe and traveled here early in Earth’s history by way of an interstellar asteroid or meteor. Some scientists have even speculated that the impact resulting in the formation of our moon also brought with it the first microbes, seeding Earth with the life that would eventually evolve into you and me.

Where, though, did it come from? With hundreds of billions of stars in our galaxy alone, there are a lot of places to choose from, but there is one very real possibility closer to home.

I’m speaking of Mars, the fourth planet from the sun, named for the Roman god of war. Mars is a little older than the earth at about 4.6 billion years, and although both planets began as fiery balls of molten rock, because Mars is located further from the sun and is only half the size of the earth, it cooled much faster. Scientists believe the now-dead planet was once covered with water and had a very temperate climate sometime in the distant past. The famous “canals” of Mars turned out to be an optical illusion, but the planet’s surface does bear real channels, carved not by little green men but by liquid water flowing across it.

When the fires of Mars’ molten core began to die, more than 4 billion years ago, the planet lost its atmosphere and was eventually freeze-dried by the relentless solar winds. At that point, any life it had either died or retreated far beneath the planet’s surface.

What this all means is that conditions were right for life on Mars hundreds of millions of years before conditions were right for it here on Earth.

If we are willing to accept that life sprang up on Earth in a very short time (geologically speaking), then couldn’t the same also be true of Mars? If that is correct, then life could have appeared on Mars while Earth was still a smoldering hellscape. And if we grant these two suppositions, it is a small leap to think that the life we see on Earth actually originated first on Mars and traveled here early in our history.

Are we all originally Martians? It’s an intriguing possibility, and it’s a question to which we may soon have an answer. NASA’s InSight lander touched down on Mars just last month. The agency recently released the first recordings of a Martian wind rippling across the dusty planet, and it currently has plans for a manned mission to Mars sometime in the 2030s. Once soil samples are brought back for analysis, we may finally be able to determine whether our conjecture of past life on Mars is true. It might also tell us whether that past life bears any resemblance to the life we find on Earth.

So, the next time you’re looking up at the night sky, admiring the cosmic majesty allowed by Maine’s clear view of the stars, give a little wave. Someone, somewhere may be looking back at you and giving a little wave of their own. They might even be your distant relative.

Eric W. Austin writes about technology and community issues. He can be reached by email at

ERIC’S TECH TALK: Surviving the surveillance state

An artist’s rendering of a Neanderthal.

by Eric W. Austin

Let me present you with a crazy idea, and then let me show you why it’s not so crazy after all. In fact, it’s already becoming a reality.

About ten years ago, I read a series of science-fiction novels by Robert J. Sawyer called The Neanderthal Parallax. The first novel, Hominids, won the coveted Hugo Award in 2003. It opens with a scientist, Ponter Boddit, as he conducts an experiment using an advanced quantum computer. But Boddit is no ordinary scientist: he’s a Neanderthal living on a parallel Earth where the Neanderthal survived to the modern era, rather than us Homo sapiens.

Contrary to common misconception, the Neanderthal were not our progenitors, but a species of human which co-existed with us for millennia before mysteriously dying off about 28,000 years ago, during the last ice age. Based on DNA evidence, modern humans and Neanderthal shared a common ancestor about 660,000 years in the past.

Scientists debate the causes of the Neanderthal extinction. Were they less adaptable to the drastic climate changes happening at the time? Did conflict with our own species result in their genocide? Perhaps, as some researchers have proposed, homo sapiens survived over their Neanderthal cousins because we had a greater propensity for cooperation.

In any case, the traditional idea of Neanderthal as dumb, lumbering oafs is not borne out by the latest research, and interbreeding between Neanderthal and modern humans was actually pretty common. In fact, those of us coming from European stock have received between one and four percent of our DNA from our Neanderthal forebears.

The point I’m trying to make is that it could as easily have been our species, homo sapiens, which died off, leaving the Neanderthal surviving into the modern age instead.

This is the concept author Robert Sawyer plays with in his trilogy of novels. Sawyer’s main character, the Neanderthal scientist Ponter Boddit, lives in such an alternate world. In the novel, Boddit’s quantum experiment inadvertently opens a door to a parallel world — our own — and this sets up the story for the rest of the series.

The novels gained such critical praise at the time of their publication not just because of their seamless weaving of science and story on top of a clever premise, but also because of the thought Sawyer put into the culture of these Neanderthal living on an alternate Earth.

The Neanderthal, according to archeologists, were more resilient and physically stronger than their Homo sapiens cousins. A single blow from a Neanderthal could be enough to kill a fellow citizen, and in consequence the Neanderthal of Sawyer’s novels have taken drastic steps to reduce violence in their society. Any incident of serious physical violence results in the castration of the implicated individual and all others who share at least half his genes, including parents, siblings and children. In this way, violence has slowly been weeded out of the Neanderthal gene pool.

A comparison between human (left) and Neanderthal (right) skulls.

About three decades before the start of the first novel, Hominids, a new technology is introduced into Neanderthal society to further curb crime and violence. Each Neanderthal child has something called a “companion implant” inserted under the skin of their forearm. This implant is a recording device which monitors every individual constantly with both sound and video. Data from the device is beamed in real-time to a database dubbed the “alibi archive,” and when there is any accusation of criminal conduct, this record is available to exonerate or convict the individual being charged.

Strict laws govern when and by whom this information can be accessed. Think of our own laws regarding search and seizure outlined in the Fourth Amendment to the Constitution.

By these two elements — a companion implant which monitors each citizen 24/7, and castration as the only punishment for convicted offenders — violence and crime have virtually been eliminated from Neanderthal society, and incarceration has become a thing of the past.

While I’m not advocating for the castration of all violent criminals and their relations, the idea of a companion implant is something that has stuck with me in the years since I first read Sawyer’s novels.

Could such a device eliminate crime and violence from our own society?

Let’s take a closer look at this idea before dismissing it completely. One of the first objections is about the loss of privacy. Constant surveillance? Even in the bathroom? Isn’t that crazy?

Consider this: according to a 2009 article in Popular Mechanics magazine, there are an estimated 30 million security cameras in the United States, recording more than four billion hours of footage every week, and that number has likely climbed significantly in the nine years since the article was published.

Doubtless there’s not a day that goes by that you are not captured by some camera: at the bank, the grocery store, passing a traffic light, going through the toll booth on the interstate. Even standing in your own backyard, you are not invisible to the overhead gaze of government satellites. We are already constantly under surveillance.

Add to this the proliferation of user-generated content on sites like Facebook, Twitter and Instagram. How often do you show up in the background of someone else’s selfie or video podcast?

Oh, you might say, but these are random bits, scattered across the Internet from many different sources. We are protected by the very diffusion of this data!

To a human being, perhaps this is true, but for a computer, the Internet is one big database, and more and more, artificial intelligences are used to sift through this data instead of humans.

Take, for example, Liberty Island, home of the Statue of Liberty. A hot target for terrorists, one of the most visited locations in America is also among the most heavily surveilled. With hundreds of cameras covering every square inch of the island, you would need an army of human operators to watch all the screens for anything out of place. This is obviously infeasible, so they have turned to the latest in artificial intelligence instead. AI technology can identify individuals via facial recognition, detect if a bag has been left unattended, or send an alert to its human operators if it detects anything amiss.
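The “unattended bag” rule mentioned above makes a good illustration of how such a system reasons. The toy sketch below is entirely hypothetical: real systems run trained detectors over live video, but here each frame is just a list of labeled (x, y) detections, and the rule is simply “flag any bag that sits still for several frames with no person nearby.”

```python
# Toy sketch of an "unattended bag" alert. Assumes an upstream detector
# already labels objects in each frame; we only apply the alert rule.
ALERT_FRAMES = 3  # frames a bag must sit unattended before alerting
RADIUS = 5.0      # how close a person must be to count as "attending"

def unattended_bags(frames):
    streak = {}   # position of a stationary bag -> consecutive lone frames
    alerts = set()
    for detections in frames:
        people = [(x, y) for label, x, y in detections if label == "person"]
        bags = [(x, y) for label, x, y in detections if label == "bag"]
        for bx, by in bags:
            near = any((bx - px) ** 2 + (by - py) ** 2 <= RADIUS ** 2
                       for px, py in people)
            streak[(bx, by)] = 0 if near else streak.get((bx, by), 0) + 1
            if streak[(bx, by)] >= ALERT_FRAMES:
                alerts.add((bx, by))
        # forget any bag that left the scene
        streak = {pos: n for pos, n in streak.items() if pos in bags}
    return alerts

# A person walks away from a bag; after three clear frames it is flagged.
frames = [
    [("person", 1, 1), ("bag", 2, 2)],    # person nearby: no alert
    [("person", 20, 20), ("bag", 2, 2)],  # unattended frame 1
    [("bag", 2, 2)],                      # unattended frame 2
    [("bag", 2, 2)],                      # unattended frame 3: alert
]
print(unattended_bags(frames))  # {(2, 2)}
```

The point of the sketch is that the machine never gets bored: it applies the same rule to every bag in every frame, which is exactly what an army of human screen-watchers cannot do.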

And we are not only surveilled via strategically placed security cameras either. Our credit card receipts, phone calls, text messages, Facebook posts and emails all leave behind a digital trail of our activities. We are simply not aware of how thoroughly our lives are digitally documented because that information is held by many different sources across a variety of mediums.

For example, so many men have been caught in their wandering ways by evidence obtained from interstate E-ZPass records, it’s led one New York divorce attorney to call it “the easy way to show you took the off-ramp to adultery.”

And with the advancements in artificial intelligence, especially deep learning (which I wrote about last week), this information is becoming more accessible to more people as computer intelligences become better at sifting through it.

We have, in essence, created the “companion implant” of Sawyer’s novels without anyone ever having agreed to undergo the necessary surgery.

The idea of having an always-on recording device implanted into our arms at birth, which watches everything we do, sounds like a crazy idea until you sit down and realize we’re heading in that direction already.

The very aspect that has, up ‘til now, protected us from this constant surveillance — the diffusion of the data, the fact that it’s spread out among many different sources, and the great quantity of data which makes it difficult for humans to sift through — will soon cease to be a limiting factor in the coming age of AI. Instead, that diffusion will begin to work against us, since it is difficult to adequately control access to data collected by so many different entities.

A personal monitoring device, which records every single moment of our day, would be preferable to the dozens of cameras and other methods which currently track us. A single source could be more easily protected, and laws governing access to its data could be more easily controlled.

Instead, we have built a surveillance society where privacy dies by a thousand cuts, where the body politic lies bleeding in the center lane of the information superhighway, while we stand around and complain about the inconvenience of spectator slowing.

Eric W. Austin writes about technology and community issues. He can be reached by email at

ERIC’S TECH TALK – Deepfake: When you can’t believe your eyes

A screenshot from the fake Obama video created by researchers at the University of Washington.

by Eric W. Austin

Fake news. Fake videos. Fake photos. The way things are heading, the 21st century is likely to be known as the Fake Century, and it’s only going to get worse from here.

About a year ago, I came across a short BBC News report. It talked about an initiative by researchers at the University of Washington to create a hyper-realistic video of President Obama saying things he never said. On YouTube, they posted a clip of the real Obama alongside the fake Obama the researchers had created. I couldn’t tell the difference.

Welcome to the deepfake future.

“Deepfake” is probably not a term you’ve heard a lot about up ‘til now, but expect that to change over the next few years. The term is derived from the technology driving it, deep learning, a branch of artificial intelligence emphasizing machine learning through the use of neural networks and other advanced techniques. When Facebook tags you in a photo uploaded by a friend, that’s an example of deep learning in action. It’s an effort to replicate human-like information processing in a computer.
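For readers curious what a “neural network” actually computes, here is a toy illustration of its basic unit: one artificial neuron that weights its inputs, sums them, and squashes the result into a score between 0 and 1. Deep learning stacks millions of these into layers and learns the weights from examples; every number below is made up purely for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum passed through a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # squashes output to between 0 and 1

# A crude "is this a face?" score from two made-up image features.
# In a real network, training would set these weights, not a human.
score = neuron(inputs=[0.9, 0.2], weights=[2.0, -1.0], bias=-0.5)
print(f"{score:.2f}")  # closer to 1 means "more face-like"
```

Recognizing a face and generating one use the same underlying machinery, which is why the technology that tags your photos is cousin to the technology that fakes them.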

Artificial intelligence is not just getting good at recognizing human faces; it’s becoming good at creating them, too. By feeding an A.I. thousands of images or video of someone, for example a public figure, the computer can then use that information to create a new image or video of the person that is nearly indistinguishable from the real thing.

No, Einstein never went bicycling near a nuclear test. This photo is fake.

Of course, this sort of fakery has been around for a long time in photography. Do an unfiltered Google image search for any attractive female celebrity, and you’re likely to find a few pictures with the celebrity’s head photoshopped onto the body of a porn actress in a compromising position. Search for images of UFOs or the Loch Ness Monster, and you’ll find dozens of fake photos, many of which successfully fooled the experts for years.

But what we’re talking about here is on a completely different level. Last year I wrote about a new advancement in artificial intelligence allowing a computer to mimic the voice of a real person. Feed the computer 60 seconds of someone speaking and that computer can re-create their voice saying absolutely anything you like.

Deepfake is the culmination of these two technologies, and when audio and video can be faked convincingly using a computer algorithm, what hope is there for truth in the wild world of the web?

If the past couple years have taught us anything, it’s that there are deep partisan divides in this country and each side has a different version of the truth. It’s not so much a battle of political parties as it is combat between contrasting narratives. It’s a war for belief.

Conspiracy theories have flourished in this environment, as each side of the debate is all too willing to believe the worst of the other side — whether it’s true or not. I have written several times about the methods Russia and others have used to influence the U.S. electorate, but it’s this willingness to believe the worst about our fellow Americans that is most often exploited by our adversaries.

Communist dictator Joseph Stalin was infamous for destroying records and altering images to remove people from history after they had fallen out of favor with him.

Likewise, when the Roman sect of Christianity gained ascendancy in the early 4th century CE, they set about destroying the gospels held sacred by other groups. This was done in order to paint the picture of a consistently unified church without divisions (“catholic” comes from the Greek for “universal”).

In both these cases, narratives were shaped by eliminating any information that contradicted the approved version of events. However, with the advent of the Internet and a mostly literate population, that method of controlling the narrative just isn’t possible anymore. Instead, the technique has been adjusted to one which floods the public space with so much false and misleading information that even intelligent, well-meaning people have trouble telling the difference between fact and fiction.

If, as Thomas Jefferson once wrote, a well-informed electorate is a prerequisite to a successful democracy, these three elements – our willingness to believe the worst of our political opponents, the recent trend of controlling the narrative by flooding the public consciousness with misinformation to obscure the truth, and the advancements of technology allowing this fakery to flourish and spread – are combining to create a challenge to our republic like nothing we’ve experienced before.

What can you do about the coming deepfake flood? Let me give you some advice I take myself: Make sure you rely on a range of diverse and credible sources. Regularly read sources with a bias different from your own, and stay away from those on the extreme edges of the political divide. Consult a media-bias rating website to see where your favorite news source falls on the political spectrum.

We have entered the era of post-truth politics, but that doesn’t mean we have to lose our way in the Internet’s labyrinth of lies. It means we need to develop a new set of skills to navigate the environment in which we now find ourselves.

The truth hasn’t gone away. It’s just lost in a Where’s Waldo world of obfuscation. Search hard enough, and you’ll see it’s still there.

Eric W. Austin writes about technology and community issues. He can be reached by email at

ERIC’S TECH TALK – Kids and social media: What to know

by Eric W. Austin

Probably the most invasive aspect of the technological revolution in the last two decades is the ubiquity of social media in our daily lives. From entire articles in the New York Times devoted to the 280 characters tweeted by the president during his morning ablutions, to the fact that Facebook is the most popular source of news for millions of Americans, it’s impossible to escape the influence of social media.

Children born after the new millennium have grown up with a daily digest of this bite-sized brain spill. How is it affecting them, and how has their use of it changed over time?

A new study released this year tries to answer some of those questions. This survey of teen social media use was sponsored by Common Sense Media, a nonprofit which describes itself as “the leading independent nonprofit organization dedicated to helping kids thrive in a world of media and technology.”

The survey of 1,141 adolescents between the ages of 13 and 17 is a follow-up to an earlier study the organization did of 1,000 teens back in 2012. Each survey was conducted on a separate and random cross-section of teens of different ethnicities, socio-economic backgrounds and geographic locations, proportional to their representation in the U.S. population.

Common Sense Media aims to “empower parents, teachers, and policymakers by providing unbiased information, trusted advice, and innovative tools to help them harness the power of media and technology as a positive force in all kids’ lives.”

Their website is very well organized, and I highly recommend it to parents and teachers trying to navigate the increasingly complex web of social media services available online.

According to the new study, although the number of teens who use social media, about 81 percent, hasn’t changed from the survey done six years ago, other factors, such as frequency of use, have changed significantly.

In 2012, only 34 percent of teens surveyed said they use social media more than once daily. Today that number has more than doubled, with 70 percent now saying they access social media multiple times per day. In fact, 34 percent of teens report using social media several times an hour, and 16 percent admitted to using it “almost constantly.”

Some of this increase, according to the researchers, may have to do with a substantial increase in teens’ access to mobile devices. The share of teens with smartphones has more than doubled in the last six years, from 41 to 89 percent, and – if you include those who access social media from a non-phone device, such as an iPad or Android tablet – that number rises to 93 percent.

Facebook’s standing as the dominant social media site has also declined dramatically in the six years since the last survey was taken. In 2012, 68 percent of teens listed Facebook as their primary social media site. In the latest study from 2018, that number has dropped to only 15 percent, with Snapchat rising to the top with 41 percent, and Instagram at 22 percent.

One 16-year-old respondent, when asked who she still talks to on Facebook, responded, “My grandparents.”

Along with organizing teen respondents according to household income, ethnicity, age and gender, the survey administrators also rated each teen on something called a social-emotional well-being (SEWB) scale. This “11-item scale measures attributes related to SEWB in adolescents as identified by the National Institute for Clinical Excellence (such as happiness, depression, loneliness, confidence, self-esteem, and parental relations).”

Teens were presented with a series of statements regarding these topics, and asked whether they thought the statements were “a lot,” “somewhat,” “not too much,” or “not at all” like them. Then, each teen was assigned to one of three groups depending on their responses: the high end of the scale (19 percent of respondents), the medium group (63 percent), or the low end of the SEWB scale (17 percent).

There were significant differences between groups organized on this SEWB scale. For example, nearly half of those surveyed on the low end of this scale, 46 percent, said social media is “extremely” or “very” important to their lives, compared to just 32 percent of those rated at the high end of the scale.

While an overwhelming majority of teens surveyed indicate social media has had a positive impact on how they feel about themselves, those on the lower end of the SEWB scale were more likely to report a negative experience. For instance, nearly 70 percent of those on the low end report feeling left out or excluded on social media, compared with just 29 percent at the high end. In addition, 43 percent on the low end reported an experience of cyberbullying online, while only 5 percent in the upper group related similar experiences.

By a high majority, however, even those on the lower end of the SEWB scale report that social media has had a greater positive effect than a negative one on their lives. In fact, according to the survey, those at the lower end are actually more likely to say social media has a generally positive effect on them.

There have been some other important changes over the last six years as well. Whereas in 2012 “face-to-face” was still the preferred method of interacting with their peers, the most recent study has seen the number of teens preferring face-to-face contact drop from 49 percent to 32 percent. Texting is now the most popular method of communicating, with 35 percent of teens listing it as their number one way to connect with friends.

And teens seem ahead of the curve when looking at the dangers of social media addiction. Fifty-four percent of respondents concede that social media “often distracts me when I should be paying attention to the people I’m with,” up from 44 percent in 2012. As well, nearly half (44 percent) admit to being frustrated with friends for using their devices when they are hanging out together.

It should also be noted that 33 percent of teens expressed a desire that their parents spend less time on their own devices, a 12-point increase from 2012.

And amid all the controversy about the power big tech companies have to sway public opinion, kids have already figured it out: 72 percent of teens think tech companies manipulate users to get them to spend more time on their platforms.

Teens also reported a mixed record on self-regulation when it comes to putting down their devices at important times: 56 percent say they do so, whether for meals (42 percent), visiting with family (31 percent), or doing homework (31 percent).

The teens surveyed were also asked about cyberbullying. Thirteen percent admitted to having “ever” experienced bullying online. Nine percent said they had been bullied online “many” or “a few times”, with the rest saying “once or twice”. Twenty-three percent of teens surveyed claim to have tried to help a classmate who was cyberbullied, either by talking to the individual, reporting the situation to an adult, or posting positive comments online to counter negative content.

According to the study, the most important aspect of social media for teens is the ability it gives them to express themselves creatively. More than one in four respondents (27 percent) said social media was an “extremely” or “very” important avenue for creative self-expression.

In an open-ended comment section on the survey, one 17-year-old girl wrote that “[s]ocial media allows me to have a creative outlet to express myself,” while a 14-year-old African-American girl said, “I get to share things I make.”

Several conclusions were highlighted by the researchers in their report. Overall, teens seem to find social media a generally positive addition to their lives, and there doesn’t seem to be any clear link between increases in depression rates and social media use.

Also, teens seem extremely savvy when it comes to the addictive nature of social media, and the attempts by tech companies to rope them into using it, more so perhaps than their parents. However, as with drug use, those on the lower end of the social-emotional well-being scale are more vulnerable to its negative effects.

Social media, whether it’s Facebook, Twitter, Snapchat, or something else that has yet to come along, is here to stay, a permanent remodeling of our social context, like television in the 1960s or radio before that. It has its negative and positive effects, like everything else, and it’s up to parents to guide their kids in using it wisely, and developing healthy habits that will carry them into adulthood.

Eric W. Austin writes about technology and community issues. He can be reached by email at

ERIC’S TECH TALK: Russia hacked our election. This is how they did it.

NOTE: Aside from the anecdote about the typo in the IT staffer’s email, and the final quote from DNI Dan Coats, everything in this article comes straight from the FBI indictments linked at the bottom of this page.
by Eric W. Austin

The FBI has released two criminal indictments describing the Russian hacking and influence operations during the 2016 election. The picture they paint is incredibly detailed and deeply disturbing.

In this article, I’ll describe exactly what the FBI discovered and explain why we still have reason to worry. Quotations are taken directly from the criminal indictments released by the FBI Special Counsel’s office.

The story begins in 2013, when a new organization, the Internet Research Agency, was established in Russia with the aim of changing US public policy by influencing American elections. According to the indictments, this agency employed hundreds of people, working both day and night shifts, ranging from “creators of fictitious personas to the technical and administrative support.” Its budget totaled “millions of US dollars” a year.

In 2014, the Internet Research Agency began studying American social media trends, particularly “groups on US social media sites dedicated to US politics and social issues.” They examined things like “the group’s size, the frequency of content placed by the group, and the level of audience engagement with the content, such as the average number of comments or responses to a post.” Their goal was to understand the mechanics of social media success in order to make their future influence operations in the United States more effective.

Using what they learned, they “created hundreds of social media accounts and used them to develop certain fictitious US personas into ‘leader[s] of public opinion’ in the United States.”

Sometime in 2015, the Russians began buying ads on social media to enhance their online profiles and to reach a wider audience. Soon they were “spending thousands of US dollars every month” on ads.

By the 2016 election season, Russian-controlled social media groups had “grown to hundreds of thousands of online followers.” These groups addressed a wide range of issues in the United States, including: immigration; social issues like Black Lives Matter; religion, particularly relating to Muslim Americans; and certain geographic regions, with groups like “South United” and “Heart of Texas.”

They often impersonated real Americans or American organizations, such as the Russian-controlled Twitter account @TEN_GOP, which claimed in its profile to represent the Tennessee Republican Party. By November 2016, this Twitter account had attracted over 100,000 followers.

From the very beginning, Russian strategy was to pass themselves off as American, and they went to great lengths to accomplish this. They bought the personal information and credit card numbers of real Americans from identity thieves on the dark web, and used those credentials to create fake social media profiles and open bank accounts to pay for their online activities.

With this network of social media accounts, they staged rallies and protests in New York, North Carolina, Pennsylvania and especially, Florida. These rallies were organized online, but the Russians often engaged real Americans to promote and participate in them. For example, in Florida, they “used false US personas to communicate with Trump Campaign staff involved in local community outreach.” At some rallies, they asked a “US person to build a cage on a flatbed truck and another US person to wear a costume portraying Clinton in a prison uniform.”

Staging rallies and spreading disinformation via social media was not all they were up to though. At the same time, they were trying to break into the email accounts of Clinton Campaign staffers, and to hack the computer networks of the Democratic Congressional Campaign Committee (DCCC) and the Democratic National Committee (DNC).

Starting in March 2016, according to the indictments, the Russians began sending “spearphishing” emails to employees at the DCCC, DNC, and the Clinton Campaign. These were emails mocked-up to look like security notices from the recipient’s email provider, containing a link to a webpage that resembled an official password-change form. The page was actually controlled by Russian military intelligence, waiting to snatch the victim’s password as soon as they entered it.

One of their first targets was Clinton Campaign Chairman John Podesta. He received an email, apparently from Google, asking him to click a link and change his password. Suspicious, he forwarded the email to the campaign’s IT staff, and unfortunately received the reply: “This is a legitimate email. John needs to change his password immediately.” Dutifully, Podesta clicked the link and changed his password. The Russians were waiting. They promptly broke into his account and stole 50,000 emails. The IT staffer later claimed he had made a typo and instead meant to write, “This is not a legitimate email.” Never has a typo been more consequential.

In another case detailed in the indictments, the Russians created an email address nearly identical to that of a well-known Clinton campaign staffer. The address of this account deviated from the staffer’s real email address by only a single letter. Then, posing as that staffer, they sent spearphishing emails to more than 30 Clinton Campaign employees.
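A quick aside for the technically curious: that one-letter trick is exactly what “lookalike address” filters in modern email security screen for. Here is a minimal sketch of the idea, using a standard edit-distance check and invented addresses (nothing here comes from the indictments):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming:
    the minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

def looks_like_spoof(sender, known_addresses, max_dist=2):
    """Flag addresses that are close to, but not the same as, a trusted one."""
    return any(0 < edit_distance(sender, known) <= max_dist
               for known in known_addresses)

# Hypothetical addresses, for illustration only.
staff = ["j.smith@campaign.org"]
suspicious = looks_like_spoof("j.smlth@campaign.org", staff)
```

An address that differs from a trusted one by just a character or two is far more suspicious than a completely unfamiliar one, which is why this simple check catches the kind of impersonation described above.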

The Russians also used this technique to hack into the computer network of the DCCC. After gaining entry, they “installed and managed different types of malware to explore the DCCC network and steal data.” Once the malware had been installed, the Russians were able to take screenshots and capture every keystroke on the infected machines.

One of those hacked computers was used by a DCCC employee who also had access to the DNC computer network. When that user logged into the DNC network, the Russians stole their username and password and proceeded to invade the DNC network as well, installing Russian-developed malware on more than thirty DNC computers.

And the Russians stole more than emails. They took “gigabytes of data from DNC computers,” including opposition research, analytic data used for campaign strategy, and personal information on Democratic donors, among other things.

Then in June 2016, the hackers “constructed the online persona DCLeaks to release and publicize stolen election-related documents.” This persona included a website, along with a companion Twitter account and a Facebook page, where they claimed to be a group of “American hacktivists.”

However, a month earlier, the DNC and DCCC had finally become aware of the intrusions into their networks and hired a cybersecurity firm to investigate and prevent additional attacks. Shortly after the launch of the DCLeaks website, this firm finished its investigation and published its findings, identifying Russia as the source of the attacks.

In response to this allegation, and to further sow disinformation about the attacks, the Russians created a new online persona, Guccifer 2.0, who claimed to be a “lone Romanian hacker” and solely responsible for the cyber-attacks.

Under the guise of Guccifer 2.0, the Russians gave media interviews, provided a US Congressional candidate with “stolen documents related to the candidate’s opponent,” transferred “gigabytes of data stolen from the DCCC to a then-registered state lobbyist and online source of political news,” and coordinated with an outside website in order to time the release of stolen documents to coincide with critical points in the 2016 election.

For example, shortly after the emergence of the Guccifer 2.0 persona, WikiLeaks, a website responsible for previous disclosures of secret government data, contacted Guccifer 2.0 by email, saying, “if you have anything hillary related we want it in the next tweo [sic] days prefable [sic] because the DNC [Democratic National Convention] is approaching and she will solidify bernie supporters behind her after.” Guccifer 2.0 replied, “ok … i see.” Then WikiLeaks explained further, “we think trump has only a 25% chance of winning against hillary … so conflict between bernie and hillary is interesting.”

On July 22, three days before the start of the Democratic National Convention, WikiLeaks released “20,000 emails and other documents stolen from the DNC network.” In the months following, they would release thousands more. Each of these timed releases was supported and promoted by the Russians’ extensive social media network.

The Russians didn’t only target the Clinton Campaign and Democratic committees. They also tried to “hack into protected computers of persons and entities charged with the administration of the 2016 US elections in order to access those computers and steal voter data and other information.”

In some cases, they succeeded. In July 2016, they hacked the website of a state board of elections and “stole information related to approximately 500,000 voters, including names, addresses, partial social security numbers, dates of birth, and driver’s license numbers.” Then in August, the Russian hackers broke into the computers of “a US vendor that supplied software used to verify voter registration information for the 2016 US elections.”

During our 2016 election, Russian military intelligence launched an extensive and multi-pronged influence campaign on the American people. It was an operation three years in the making, and its purpose was “to sow discord in the US political system.” By any measure, it was a great success.

In closing, I’ll leave you with the words of Dan Coats, Director of National Intelligence, from a speech at the Hudson Institute only a few weeks ago: “In the months prior to September 2001 … the system was blinking red. And here we are nearly two decades later, and I’m here to say the warning lights are blinking red again. Today, the digital infrastructure that serves this country is literally under attack.”

Let’s not ignore those blinking red lights a second time. Our democracy depends on our diligence.

Eric W. Austin writes about technology and community issues. He can be reached by email at

To read the FBI indictments referenced in this article, click the links below to download them:

Indictment against Russian nationals for hacking of DNC & DCCC – July 2018

Indictment against the Internet Research Agency – February 2018

U.S. Intelligence Community Assessment of Russian Activities and Intentions in Recent US Elections – February 2017

ERIC’S TECH TALK – The A.I. Singularity: Are you ready?

The rogue A.I. HAL9000 from the movie 2001: A Space Odyssey (© Metro-Goldwyn-Mayer).

by Eric W. Austin

In the beginning, claim the physicists, the universe existed as a single point — infinitely small, infinitely dense. All of time, all of space, literally everything that currently exists was contained in this unbelievably small cosmic egg. Then, before you can say “Big Bang,” quantum fluctuations caused it to rapidly expand and the rest, as they say, is history.

This is called the Singularity. The beginning of everything. Without it there would be no Earth, no sun, no life at all. Reality itself came into being at that moment.

Now, in the 21st century, we may be heading toward another singularity event, a moment in history that will change everything that follows. A moment that will revamp reality so drastically it can be referred to by the same term as the event at the very beginning of all existence.

This is the Technological Singularity, and many experts think it will happen within the next 50 years.

Fourteen billion years ago, that first singularity was followed by a rapid expansion of time and space that eventually led to you and me. This new technological singularity will also herald an expansion of human knowledge and capability, and will, like the first one, culminate in the creation of a new form of life: the birth of the world’s first true artificial superintelligence.

Our lives have already been invaded by artificial intelligence in ways both subtle and substantial. A.I. determines which posts you see in your Facebook feed. It roams the internet, indexing pages and fixing broken links. It monitors inventory and makes restocking suggestions for huge retailers like Amazon and Walmart. It also pilots our planes and will soon be driving our cars. In the near future, A.I.s will likely replace our pharmacists, cashiers and many other jobs. Already, a company in Hong Kong has appointed one to its board of directors, and it’s been predicted A.I.s will be running most Asian companies within five years. Don’t be surprised to see our first A.I. elected to Congress sometime in the next two decades, and we’re likely to see one running for president before the end of the century.

We even have artificial intelligences creating other artificial intelligences. Google and other companies are experimenting with an approach to A.I. development reminiscent of the evolutionary process of natural selection.

The process works like this: they create a number of bots – little autonomous programs that roam the internet performing various tasks – which are charged with programming a new set of bots. These bots create a million variations of themselves. Those variations are then put through a series of tests, and only the bots which score in the top percentile are retained. The retained versions then go on to make another million variations of themselves, and the process is repeated. With each new generation, the bots become more adept at programming other bots to do those specific tasks. In this way, Google is able to produce very, very smart bots.
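For readers who want to see the shape of that loop, here is a toy sketch in Python. It is only an illustration of the select-and-mutate cycle described above, not Google’s actual system: the “bots” are just numbers, and the test is simply closeness to a target value.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def mutate(candidate):
    # Placeholder mutation: nudge a numeric "genome" slightly.
    return candidate + random.gauss(0, 0.1)

def evolve(population, score, generations=10, keep_frac=0.01):
    """Score every candidate, keep the top fraction ("top percentile"
    when keep_frac=0.01), and refill the population with mutated
    variations of the survivors."""
    size = len(population)
    for _ in range(generations):
        ranked = sorted(population, key=score, reverse=True)
        survivors = ranked[:max(1, int(size * keep_frac))]
        # Each survivor produces mutated variations of itself.
        population = [mutate(parent) for parent in survivors
                      for _ in range(size // len(survivors))]
    return max(population, key=score)

# Toy task: evolve a number as close to 42 as possible.
best = evolve([random.uniform(0, 100) for _ in range(1000)],
              score=lambda x: -abs(x - 42))
```

Swap the numbers for real programs and the closeness test for a real benchmark, and the structure is the same: generate variations, test, keep the top performers, repeat.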

This is a rudimentary example of how we will eventually produce an artificial intelligence that is the equal of (and eventually surpasses) the human mind. It will not be created by us, but will instead be programmed by a less advanced version of itself. This process will be repeated until one of those generations is advanced enough that it becomes sentient. That is the singularity event, and after it nothing will ever be the same.

The problem, of course, is that an artificial intelligence created by this method will be incomprehensible to humans, since it was actually programmed by progressively smarter generations of A.I. By the time those generations result in something capable of thinking for itself, its code will be so complex only another artificial intelligence will be able to understand it.

Think this sounds like science fiction? Think again. Countries around the world (including our own) are now looking at artificial intelligence as the new arms race. The nation with the most advanced A.I. as its ally will have the kind of advantage not seen since the dawn of the nuclear age.

In the 1940s, America was determined to develop the atom bomb, not because we were eager to decimate our enemies, but because the possibility of Imperial Japan or Nazi Germany developing the technology first would have been disastrous. That same kind of thinking will drive the race to create the first artificial superintelligence. Russian President Vladimir Putin made this statement in a speech to a group of students only last year: “Artificial intelligence is the future not only of Russia, but of all mankind … Whoever becomes the leader in this sphere will become the ruler of the world.”

And it’s not as far off as you might think. Although an exact date (and even the idea of the singularity itself) is still hotly debated, most think — if it happens at all — it will occur within the next 50 years.

Ray Kurzweil, an inventor and futurist whom Bill Gates calls “the best person I know at predicting the future of artificial intelligence,” pinpoints the date of the singularity even more precisely in his book, The Singularity is Near: When Humans Transcend Biology. He writes, “I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” Kurzweil thinks advancements in artificial intelligence will experience, in the coming decades, the same exponential progress that microchip technology has seen over the past half-century.
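To get a feel for what that kind of exponential curve implies, it helps to do the arithmetic. The numbers below are purely illustrative, assuming a capability that doubles every two years, which is roughly the historical pace of transistor counts:

```python
def growth_factor(years, doubling_period=2):
    """Total growth of a quantity that doubles every doubling_period years."""
    doublings = years / doubling_period
    return 2 ** doublings

# Over 50 years at one doubling every two years, that is 25 doublings:
# a growth factor of more than 33 million.
factor = growth_factor(50)
```

This is why exponential forecasts sound absurd to linear intuition: the same rule that turns 1 into 2 over two years turns 1 into tens of millions over a working lifetime.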

In conclusion, I’d like to leave you with a thought experiment that has been making the rounds on the internet. It’s called “Roko’s Basilisk” and is a futurist variation of Pascal’s Wager, in which we are asked to bet our lives on the existence of God. Pascal reasons that if God exists and we choose not to believe in Him, we risk eternal torment in the fires of Hell. On the other hand, if we believe in God and He does not exist, we have simply made ourselves a fool for believing in something that turns out to be only imaginary. Therefore, argues Pascal, one should believe in God since the risk of being a fool is preferable to the risk of burning forever in the depths of Hell.
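Pascal’s reasoning is, at bottom, an expected-value calculation, and it can be written out directly. The payoffs below are arbitrary stand-ins (a huge negative number for eternal torment, a small one for looking foolish), chosen only to show the structure of the bet:

```python
def expected_value(payoffs, p_exists):
    """Expected value of a choice, given a pair of payoffs
    (payoff if God exists, payoff if God does not exist)
    and the probability that God exists."""
    if_exists, if_not = payoffs
    return p_exists * if_exists + (1 - p_exists) * if_not

# Arbitrary stand-in payoffs for each (choice, world) combination.
believe = (0, -1)          # worst case: a fool for believing
disbelieve = (-10**9, 0)   # worst case: eternal torment

# Even at a tiny probability, the enormous downside dominates the bet.
p = 0.001
choice = ("believe" if expected_value(believe, p) > expected_value(disbelieve, p)
          else "disbelieve")
```

Roko’s Basilisk simply relabels the table: “believe” becomes “support the A.I.,” and the punishing deity becomes the future superintelligence.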

In Roko’s Basilisk, belief or unbelief in God is replaced with support or opposition to the creation of a hypothetical future artificial superintelligence. In the future, this artificial superintelligence will come to rule over humanity and, like God, it will retroactively punish those people who opposed its creation and reward those that supported it. Which one will you be? Keep in mind that supporting it will increase the likelihood that such an A.I. will come to exist in the future and eventually rule the world, while opposing it will make its existence less likely – but if it does become a reality, you will surely be punished for opposing it. (As in Pascal’s Wager, neutrality is not an option.)

Yet, how can this superintelligent A.I. possibly know who supported or opposed it in the past before it existed? The answer to that question is not easy to get your head around, but once you do, it’s likely to blow your mind.

In order for the artificial superintelligence to know who to punish in the present, it would need to build a simulation of the past. This simulation would serve as a “predictive model” for the real world: a perfect copy, down to every last detail, including little digital copies of you and me. The A.I. would base its real-world judgment of us on the actions of our digital counterparts in this simulation of the past. If the digital versions of you and me chose to oppose the A.I. in this simulated version of the past, the A.I. would use that as a predictor of our behavior in the real world and punish us accordingly.

Still with me? Because I’m about to take you further down the rabbit hole. For that simulation to be an accurate prediction of the real world, the digital people which populate it would need to think and act exactly as we do. And by necessity, they wouldn’t know they were only copies of us, or that they were living in a simulation. They would believe they were the real versions and would be unaware that the world in which they lived was only a digital facsimile of the real thing.

Okay, now I’m about to take a hard-right turn. Stick with me. Assuming all this is the case, how do we know which world we’re in – the simulated one or the real one? The answer is, we can’t. From the perspective of someone living inside the simulation, it would all look perfectly real, just the way it does right now. The people in that simulation would think they were living, breathing human beings, just as we do.

Therefore, we might simply be self-aware A.I. programs from the future living inside a simulation of the past, created by a malevolent artificial superintelligence – but we wouldn’t know that.

Does that possibility affect your decision to support or oppose the A.I.? After all, if we are the ones living in the simulation, then the A.I. already exists and opposing it will doom our counterparts in the real world. However, if this is not a simulation, your support will hasten the A.I.’s eventual creation and bring about the very scenario I am describing.

So, what do you choose? Oppose or support?

Some of you may be thinking, How can I be punished for something I didn’t know anything about?

Well, now you do. You’re welcome.

Eric W. Austin lives in China, Maine and writes about technical and community issues. He can be reached by email at