ERIC’S TECH TALK: What is “intelligence” in the age of A.I.?

by Eric W. Austin

In an age where artificial intelligence (AI) is increasingly woven into our daily lives, from virtual assistants to autonomous vehicles, the question of what constitutes ‘intelligence’ becomes ever more relevant. Is it the ability to solve complex problems, the capacity for creative thought, or something more elusive?

When we witness machines performing tasks once thought exclusive to humans, it challenges our traditional understanding of intelligence. This leads us to ponder: can machines truly ‘think’, or are they merely simulating a facet of human cognition? As we venture deeper into this era of advanced AI, it’s crucial to explore not only how we define intelligence but also the implications of attributing such a quality to machines. This exploration raises profound questions about our perceptions of intelligence and the potential risks of misjudging the line between genuine cognitive abilities on one hand, and sophisticated programming on the other.

Defining intelligence has long been a subject of debate among psychologists, neuroscientists, and AI researchers. Intelligence in humans is often gauged by the ability to learn, adapt to new situations, understand complex concepts, and apply logical reasoning. Psychological assessments, like IQ tests, attempt to quantify these abilities, though they remain subject to debate regarding their comprehensiveness and bias. In the realm of AI, intelligence takes on a different hue – it’s about the ability of machines to perform tasks that would typically require human intelligence. This includes pattern recognition, language understanding, and decision-making. However, this technological mimicry raises the question: does the replication of human-like problem-solving denote true intelligence, or is it merely a sophisticated imitation? The distinction is crucial, as it shapes our understanding of AI’s role and potential in our society.

Consider a straightforward scenario: a man is tasked with delivering boxes from Point A to Point B. Upon arrival, his job involves carrying the boxes up a flight of stairs to a designated spot. This task, while simple, is generally perceived as one carried out by an intelligent agent – a human. Now, imagine automating a part of this process: at the destination, a conveyor belt is introduced. Instead of manually carrying the boxes upstairs, the man places them on the conveyor belt, which completes the task. Although the end result is the same, attributing intelligence to the conveyor belt seems illogical. This suggests that the mere completion of a task, initially performed by a human, doesn’t inherently transfer the attribute of intelligence to the machine.

On the other hand, consider a task undeniably associated with human intelligence: writing a poem in iambic pentameter. This creative endeavor requires not just linguistic skill but also emotional depth, creativity, and an understanding of complex literary techniques. If we were to replace the human poet with a machine capable of crafting a comparable poem, does the achievement of this task by the machine signify intelligence? This juxtaposition of tasks, from the mundane to the highly complex, raises a pivotal question: Is the distinction we draw between these tasks in terms of intelligence merely a matter of their complexity, or is there a deeper criterion at play in our perception of what constitutes ‘intelligent’ action?

As we turn our gaze to the present capabilities of AI, we find a landscape teeming with advancements that once belonged to the realm of science fiction. Today’s AI systems can diagnose diseases, translate languages in real time, create stunning artwork, and even write articles like this one. The complexity and sophistication of these tasks are escalating rapidly, pushing the boundaries of what we thought machines could achieve. Does the ability of AI to perform these complex tasks equate to intelligence? Or are these systems still operating within the realm of advanced algorithms and data processing, lacking the essence of true cognitive understanding?

The concept of Artificial General Intelligence (AGI) takes this discussion a step further. AGI refers to a machine’s ability to understand, learn, and apply its intelligence to a wide range of problems, much like a human brain. Unlike specialized AI designed for specific tasks, AGI embodies the flexibility and adaptability of human intellect. The pursuit of AGI raises profound questions: If a machine can mimic the broad cognitive abilities of a human, does it then possess ‘intelligence’ in the true sense? And how do we reconcile this with our earlier distinctions between simple automation and genuine intelligence?

As AI continues to evolve and blur the lines between programmed efficiency and cognitive ability, we find ourselves grappling with the very nature of intelligence. From the simplicity of a conveyor belt to the complex potential of AGI, our understanding of intelligence is continually being challenged and redefined. This article does not seek to provide definitive answers but rather to provoke reflection on what it means to be intelligent, both in humans and machines.

As we witness the rapid advancement of AI technologies, the question remains open for interpretation and contemplation: What truly defines intelligence, and how close are we to witnessing its embodiment in the machines of tomorrow?

Will A.I. break the future? (TLDR: yes, yes it will)

This picture was generated with the MidJourney AI image creation tool, using the following description: “A 16:9 image of a futuristic city in ruins, with skyscrapers crumbling and fires burning. The sky is bright and orange, with clouds of smoke and dust. In the center of the image, a giant metal robot stands on the ground, holding a gun in its right hand and a sword in its left. The robot is sleek and shiny, with wires and antennas on its head. It has a large eye-like lens in the middle of its face, and a mouth-like grill below it. It is the AI that has caused the apocalypse, and it is looking for any survivors to eliminate. The image is at a high resolution of 5K, with sharp details and vivid colors. The image conveys a mood of horror and despair, as the AI dominates the world and destroys humanity.”

by Eric W. Austin

Can machines think? How do we even define thinking, and how can we recognize it in anyone other than ourselves? These questions are becoming increasingly important as recent advancements in artificial intelligence blur the line between human cognition and digital simulation. The progress made in language processing through the use of neural networks, which simulate the functioning of a human brain, has yielded results that have surprised many, including myself. In this article, I will share my experience playing with two of these new tools over the past several weeks.

On the most basic level, the recent breakthroughs in AI fall into two broad categories: text processing and image generation. Tools and apps that employ this technology are popping up everywhere in different variations, so there are many to try out, but this article is based on my experience with two of the most popular: ChatGPT and MidJourney AI.

First, let’s discuss the technology behind these advances in artificial intelligence and why they represent such a significant leap forward in achieving human-like responses from a computer. Traditional AI was primarily rules-based, with programmers coding specific instructions for situations the computer might encounter. While effective, this approach was limited by the programmer’s foresight. If the computer encountered an unplanned situation, it would be at a loss. Rules-based AI worked well for simple cause-and-effect relationships between human actions and computer responses, but programming a computer to perform more subtle tasks, such as distinguishing between the images of a dog and a cat, or writing an original essay, proved to be more challenging.
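
To make that concrete, here is a toy sketch, in Python, of what a rules-based approach looks like in practice. The scenario and its rules are entirely invented for illustration, but they show the core weakness: any situation the programmer didn’t anticipate leaves the program at a loss.

```python
# A toy example of the "rules-based" approach described above: the programmer
# writes explicit instructions for every situation they can anticipate.
# The scenario and rules are invented for illustration only.
def classify_animal(weight_kg, sound):
    if sound == "woof":
        return "dog"
    if sound == "meow":
        return "cat"
    if weight_kg > 200:
        return "probably not a house pet"
    return "no rule covers this case"   # the brittle part

print(classify_animal(30, "woof"))    # dog
print(classify_animal(4, "meow"))     # cat
print(classify_animal(5, "squeak"))   # no rule covers this case
```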

Rather than attempting to develop a program capable of handling every conceivable situation, researchers focused instead on creating learning machines inspired by the human mind and utilizing statistical modeling techniques. Terminology used to describe this technology often reflects the human physiology that inspired it. This new AI is constructed from layers of neurons that transmit information from one layer to the next, while applying sophisticated statistical modeling in order to “understand” the data it is processing.

Connections between these artificial neurons, similar to the connections between neurons in the human brain, are referred to as “synapses” and determine the flow of information within the neural network. Each neuron can be thought of as a mathematical function or algorithm that processes input data, performs a calculation, and generates a result. This output is then used as input for other neurons, which apply further processing and pass their results along to neurons in subsequent layers.
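
A minimal sketch may help make the idea of a neuron as a mathematical function concrete. The inputs and weights below are made-up numbers, and real systems use millions or billions of neurons with weights learned from data, but the flow of information from one layer to the next looks roughly like this:

```python
# A toy illustration of the "neuron as a mathematical function" idea described
# above. The weights here are invented; real networks learn theirs from data.
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weight each input, sum, then squash the result."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation

inputs = [0.5, 0.8, 0.2]                # e.g., three numbers derived from an image

# A tiny "hidden layer" of two neurons, each looking at all three inputs.
hidden_layer = [
    neuron(inputs, [0.4, -0.6, 0.9], bias=0.1),
    neuron(inputs, [-0.3, 0.8, 0.5], bias=-0.2),
]

# The hidden layer's outputs become the inputs to the next layer.
output = neuron(hidden_layer, [1.2, -0.7], bias=0.05)
print(f"Network output: {output:.3f}")  # a single number between 0 and 1
```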

This layered process converts whatever digital input it is given into numerical data the computer can work with. That input can be diverse, ranging from a sentence in English to a picture of a squirrel. Once the input has been transformed into millions of computational data points, the AI utilizes concepts from statistical modeling to identify patterns within the result. Returning to our earlier example of distinguishing dogs, developers feed thousands of dog images into the computer. The AI then analyzes each image and detects patterns the images have in common. With feedback from developers on correct and incorrect outputs, the artificial intelligence can adjust its own models to improve results without requiring additional human programming. This process of providing data to the AI in order to improve its modeling is known as training. Once the AI has been trained to recognize specific patterns common to dog images, it can successfully identify previously unseen pictures of dogs.
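
Here is a toy version of that training loop. The data is invented, and the two numbers standing in for each “image” are stand-ins for patterns a real system would extract from actual pictures; the point is only to show how feedback about right and wrong answers lets a program adjust its own weights:

```python
# A toy "training" loop: the program nudges its own numbers (weights) based on
# feedback about right and wrong answers. The data is entirely invented.
import math

# (features, label) pairs: label 1 = "dog", 0 = "not dog".
examples = [
    ([0.9, 0.2], 1), ([0.8, 0.3], 1), ([0.7, 0.1], 1),
    ([0.2, 0.9], 0), ([0.1, 0.8], 0), ([0.3, 0.7], 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.5

for epoch in range(1000):
    for features, label in examples:
        # Current guess, squashed to a probability between 0 and 1.
        z = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 / (1 + math.exp(-z))
        # Feedback: how far off was the guess?
        error = label - prediction
        # Nudge each weight in the direction that reduces the error.
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

# Try a previously unseen example that resembles the "dog" pattern.
z = sum(w * x for w, x in zip(weights, [0.85, 0.15])) + bias
print(f"Probability of 'dog': {1 / (1 + math.exp(-z)):.2f}")
```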

Since the AI has been trained to recognize patterns and features specific to dog images, it can then utilize this knowledge to create new, original images of dogs. This is achieved through a process called generative modeling. The AI essentially learns the underlying structure and distribution of the data it has been trained on, which allows it to generate new data samples that share similar characteristics. In the case of dog images, the AI has learned the various features, such as shapes, colors, and textures, that are commonly found in pictures of dogs. By combining and manipulating these features, the AI can generate entirely new images of dogs that, while unique, still resemble real dogs in appearance. This creative capability has numerous applications, only some of which we are beginning to understand or apply.
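
The generative step can be illustrated in a deliberately crude way: learn the typical range of each feature from existing examples, then sample new combinations from those learned ranges. Real image generators such as MidJourney use far more sophisticated models, but the principle is the same. The “features” below are invented for the sake of the example:

```python
# A deliberately crude illustration of generative modeling: learn a simple
# distribution from existing examples, then draw new samples from it.
import random
import statistics

# Invented "dog image" feature vectors (say, snout length and ear floppiness).
dog_features = [
    [0.90, 0.70], [0.85, 0.75], [0.92, 0.68], [0.88, 0.72], [0.95, 0.65],
]

# "Learn" the distribution: the mean and spread of each feature.
means = [statistics.mean(col) for col in zip(*dog_features)]
stdevs = [statistics.stdev(col) for col in zip(*dog_features)]

# "Generate" three new, never-before-seen dogs by sampling from that distribution.
for i in range(3):
    new_dog = [random.gauss(m, s) for m, s in zip(means, stdevs)]
    print(f"Generated dog {i + 1}: {[round(x, 2) for x in new_dog]}")
```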

The same modeling techniques used for processing images of dogs can be applied to understanding language or analyzing other complex input. When applied on a large scale, the AI can recognize a wider variety of objects and interpret intricate input data. For instance, instead of merely examining pictures of dogs, the AI can analyze entire movies, identifying all the objects in each scene. Furthermore, it can detect patterns not only in the objects themselves but also in the relationships between them. In this manner, the AI can begin to understand the world in ways that mirror our own.

Since the AI perceives language and image information as interchangeable forms of data, it can translate textual descriptions into image data and vice versa. This means that users can describe a scene to an AI in natural language, and it can then generate an image based on their specifications. This capability has led to the development of various tools for creating artwork, of which MidJourney and Leonardo AI are two of the most popular and advanced. For example, I used the MidJourney AI tool to create the accompanying original image of a rogue robot standing in the rubble of an American city. See the photo caption for the description I used to generate the image.

Unlike MidJourney, which generates images, ChatGPT focuses on producing text. At its core, it shares similarities with the predictive text functionality found on modern smartphones. As users begin typing messages on their devices, the interface attempts to predict the intended word to speed up typing. ChatGPT operates on a similar principle, but with far greater complexity and sophistication. Instead of merely suggesting individual words, ChatGPT is capable of creating entire sentences and paragraphs. The applications for this are almost limitless. It can engage in knowledgeable conversations on almost any subject in a manner that is strikingly human-like and contextually responsive. It can compose essays with minimal input. In the business world, this technology has the potential to replace customer support representatives, secretaries, and any job requiring language understanding and interpretation, an area that has long eluded automation efforts.
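
To see the predictive-text principle at its simplest, here is a toy next-word predictor that just counts which word tends to follow which in a small sample of text. ChatGPT relies on enormous neural networks rather than simple word-pair counting, so this is only an illustration of the underlying task (predict the most likely next word), not of how the real system works. The training text is invented:

```python
# A toy next-word predictor in the spirit of the smartphone analogy above.
from collections import Counter, defaultdict

training_text = (
    "the dog chased the cat and the cat climbed the tree "
    "the dog barked at the tree and the cat watched the dog"
)

# Count which word tends to follow which.
words = training_text.split()
following = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate a short "sentence" by repeatedly predicting the next word.
sentence = ["the"]
for _ in range(6):
    nxt = predict_next(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)
print(" ".join(sentence))
```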

This article has only provided a glimpse into the intricate and expansive world of this new technology. Numerous topics remain unexplored, such as the countless uses to which this technology can be put, the ethical and legal ramifications, and the potential issues surrounding AI bias or manipulation. A more in-depth discussion of these aspects will be reserved for future articles. Nevertheless, it is evident that this technology will transform our world in ways we cannot yet fathom, and it will likely do so in the very near future.

The personal computer took 20 years to revolutionize our lives, while the smartphone achieved a similar impact in under a decade. These latest advancements in artificial intelligence are poised to bring about even more profound changes in a much shorter timeframe – possibly within a few years or even months. So, buckle up, my friends, the AI apocalypse has arrived!

Eric W. Austin writes about technology and community topics. Contact him by email at ericwaustin@gmail.com.

ERIC’S TECH TALK: How the internet tricked my mom

Screenshot of actual text message that tricked my mom.

by Eric W. Austin

Well, my mother got scammed on the internet, again. Last week, she received a text on her phone claiming to be from the shipping company UPS. The text message said they “were unable to complete your delivery due [to] incomplete address,” and included a website link for her to schedule a new delivery. The link took her to a website with the UPS logo and asked her to enter her credit card information to pay for a $1.14 “redelivery fee”.

When she told me about it later in the day, I immediately found the incident suspicious. I receive packages from UPS all the time and have never been required to pay for a redelivery. She also told me she got the text at 4 a.m., but who is doing deliveries at that time of day? I asked her to show me the text message. It came from an “unknown” number, and the link they provided was a shortened URL — a link designed to redirect the recipient from a short web address to a longer, more complex one. This one started with “bit.ly”, a common provider of URL shortcuts. That doesn’t mean any similar link is automatically suspicious, since many credible people and organizations use this service to shorten links shared on social media, but scammers use the same method to disguise the fact that they are sending you to an illegitimate website.
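
For readers curious about the mechanics, a shortened link is simply a web address whose only job is to answer with a “redirect” response pointing at the real destination. The short Python sketch below (using the common requests library) asks for only that redirect header, without loading the destination page. The bit.ly address in it is a made-up placeholder, not the link from the scam text, and checking links this way is something I’d leave to the technically inclined.

```python
# A minimal sketch of how a URL shortener works: the short address replies with
# an HTTP redirect pointing at the real destination. This asks only for that
# redirect header and does not load the destination page.
# Requires the third-party 'requests' package (pip install requests).
import requests

short_url = "https://bit.ly/example-placeholder"  # hypothetical shortened link

response = requests.head(short_url, allow_redirects=False, timeout=10)
destination = response.headers.get("Location")

print("Status code:", response.status_code)   # typically 301 or 302 for a redirect
print("Redirects to:", destination)           # the real address behind the short link
```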

On her phone, this link had sent her to a webpage with an address beginning with “www45.”. I was not able to discover exactly what this prefix means, but the first Google result referencing a similar address came from a user complaining about getting a virus from it.

When I forwarded the text message to my own computer and opened the link in my browser, it did not take me to a faux UPS website, as it had on her phone. Instead, it opened a different random website each time I clicked on it, each of which my browser’s anti-malware security software automatically blocked as a safety precaution before I could even view its content. I believe the link in the text message was programmed to open the fake UPS site only when launched on a smartphone, because that was the platform the scammers were targeting. (It should be noted, I emphatically do not recommend anyone click on such a link, as it could potentially install a virus on your computer, but I was curious about where it would take me, and I have precautions installed on my PC and know how to deal with a virus if I get one. For everyone else: never click on a suspicious link!)

Although I couldn’t find an exact match to this scam on the official UPS website, they did acknowledge awareness of similar scams on their FAQ page.

Based on this brief analysis, I think there is no question that this text was sent to my mother by a scammer and it was not actually from UPS about a package delivery. We called her bank and canceled her credit card. A new card should arrive in a few weeks and, according to the bank, no unauthorized charges had been made on her account. It’s inconvenient but no lasting harm was done.

But why did my mom fall for it? She’s a smart lady and is well-aware of the prevalence of scammers who frequently prey on senior citizens like her. Part of the reason, I think, is the fact that she was expecting a package and that delivery was late. “How did they know I was expecting a package?” she asked incredulously when I told her I thought she had been the victim of a scam.

And I think this reaction is the key to why she was duped. She was expecting a package, it was late, and the text seemed to fit into the pattern she was expecting to see. How did the scammer know she was expecting a delivery? Did they steal her order information from Amazon or UPS? I recommended she change her Amazon password just in case, but I’m not sure the scammer had any special knowledge about her ordering habits.

We live in the age of Amazon and other online retailers. In any given week, I am probably expecting a package. We don’t realize just how often most of us receive items through the mail. Something that was fairly rare two decades ago has become a commonplace occurrence today. I suspect this scammer sent a similar text message to thousands (maybe millions?) of people, and (I’m guessing here) maybe 80 percent of them are anticipating the receipt of a package from somewhere at some point during the week. Although it’s possible the scammer hacked Amazon or the UPS website and stole my mother’s information as part of an effort to target her, I think it’s more likely they just got lucky in the timing of their text message.

Hopefully, this article can serve as a reminder to everyone to be aware of such predatory behavior. Seniors seem to be especially targeted by these scammers. My mother frequently receives phone calls on her landline from people who claim to be one of her grandchildren and in desperate need of cash. She’s learned not to trust such calls. Now, she will be wary of suspicious texts too. If you are one of these older folks, be suspicious! Ask your kids for advice if you have a concern. If you are a younger person, look out for your parents and grandparents. Speak to them about these issues and caution them to be watchful.

And it’s always a good practice to avoid clicking on links in emails or text messages unless you are certain the source is trustworthy.

Email the author at ericwaustin@gmail.com.

ERIC’S TECH TALK: Communication is the secret sauce of social change

A mosaic depicting Alexander the Great in battle, discovered in the city of Pompeii, superimposed with the face of Mark Zuckerberg.

by Eric W. Austin

There is something in the philosophy of history variously called the Whig interpretation of history, Whig historiography, or just Whig history. It’s a view that sees the historical record as an inexorable push toward greater progress and civilization. In this view of the past, society is on a continuous path from savagery to civility, constantly improving, becoming freer, always taking two steps forward for any regrettable step back.

This idea gained popularity during the 18th-century Enlightenment and was epitomized in the writings of German philosopher Georg Wilhelm Friedrich Hegel (1770–1831) and in works such as The History of the Decline and Fall of the Roman Empire by English historian Edward Gibbon, published in six volumes between 1776 and 1789. Other Enlightenment thinkers, like David Hume, criticized the approach, and it lost some favor in the aftermath of the horrors of World Wars I and II, but the Whig view of history is still held by many people today, even if they may not be aware of its history or what to call it. There is an almost intuitive acceptance of the idea in modern culture.

As a social philosophy, it served as a driving force in the civil rights movement of the 1960s, expressed most eloquently in a 1968 speech at the Washington National Cathedral by Martin Luther King, Jr., where he said, “We shall overcome because the arc of the moral universe is long but it bends toward justice.” More recently, former President Barack Obama alluded to this sentiment after the 2016 election, saying, “You know, the path this country has taken has never been a straight line. We zig and zag and sometimes we move in ways that some people think is forward and others think is moving back. And that’s OK.” We may zig and zag but, ultimately, we are moving forward.

I have long been fascinated with this idea of history as a progression, ultimately, toward improvement. There is something comforting about it, something hopeful. And something obvious too. In our modern world where technology is constantly improving and offering us additional benefits, it’s easy to fall into thinking that continuous progress is part of some immutable law of nature, that progress is inevitable.

In recent years, however, I have grown more skeptical of the idea. For one thing, we have to ask: progress for whom? We generally judge outcomes based on our own present circumstances — in other words, we see our history as “progress” because we are the outcome of that history. We are the product of a cultural progression that produced us. The winners write the history, and their descendants read that history and deem it “progress”. But was it progress from the perspective of the Native American tribes that were wiped out by the coming of Europeans? Did Christianity represent progress for the pagans of the 4th century Roman Empire who were watching their traditions being replaced and superseded by a new religion? We tend to view the past as progress because we are the end products of the winning side. A natural bias, perhaps. The more serious error comes when we use this view of the past to make assumptions about the future.

Often social change is driven by technological innovations, particularly advances in how we communicate. Think about the invention of writing as one of those advancements that transformed, over a thousand years, oral societies into written ones. We take writing for granted today, but at the time it was revolutionary. No longer did you need to trust someone else’s recollection of past events. Now you had a written record, essentially immutable and unchangeable, at least in theory. Agreements could be written down and later referred to as a way to settle disputes. History could be recorded and preserved for future generations.

Writing brought many benefits to society. Most importantly, the ability to reliably preserve knowledge allowed subsequent generations to more easily build on the progress of past generations. But writing also introduced new conflicts about who would control how that information was preserved. In many ways, writing imposed new cultural restrictions on the ordinary person who had grown up in an oral society. There was now an official version of a story, and any interpretation that differed from it could be judged “wrong”. Control over the historical narrative was now dictated by an elite group with the specialized skills required to read and write. Writing made culture more transportable, but it also made culture easier to police. Writing introduced new cultural gatekeepers and also new conflicts.

The Bible tells the story of the Tower of Babel (Genesis 11:1–9), in which an early society comes together to build a tower to reach the heavens. Seeing this act as the height of arrogance, God strikes the people with a confusion of languages, confounding their undertaking and, unable any longer to communicate, they scatter across the earth. While the story is probably an origin myth meant to explain why various peoples speak different languages, it contains an important truth about the power of communication in human endeavors.

The conquests of Alexander the Great in the 4th century BCE serve as a foil to the story of the Tower of Babel and illustrate how fundamental communication is to the evolution of culture. Alexander was the ruler of Macedon, a kingdom located north of the Greek peninsula. Although there was debate even at the time about whether Macedonians were considered Greek, there is no question that Alexander was a devotee of Greek culture. Influenced by his tutor, the famous Greek philosopher Aristotle, Alexander sought to spread Greek culture in the lands he conquered. By the time of his death in 323 BCE, at the age of 32, his empire was one of the largest in history, encompassing Greece, the Middle East, and northern Africa (Egypt), and stretching as far east as India.

A map of the territory conquered by Alexander the Great. (photo credit: Encyclopedia Britannica)

But Alexander was not just a conqueror of territory, he was also a cultural evangelist. He was, by some reasonable estimates, the most influential figure in the history of Western civilization. During his short, 13-year military career, he founded dozens of cities (many named after himself) in the style of the Greek polis, or city-state of Ancient Greece. Most importantly, because of his influence, the Greek language became the lingua franca – the common language – for the entire region. Alexander the Great is the reason the New Testament was written in Greek. What God had torn asunder at the Tower of Babel, Alexander put back together again.

It’s important to note that while we may see this as progress now, and one of the foundational periods in the development of Western civilization, it was also an incredibly destructive process for the societies going through it. Greek culture replaced, or in many cases merged with, the existing native cultures to create a hybridized version, in a process historians refer to as Hellenization.

Rome later built upon the foundations that Alexander had laid down, although Roman culture was more about assimilation than innovation. Rome built the infrastructure, and through the Pax Romana (“Roman Peace”) created the stability that allowed Greek culture and ideas to flourish and spread in the centuries following Alexander’s conquests. Not only were Rome’s famous roads essential to the flow of goods throughout the empire, but also ideas, and ideas are the seeds of culture.

Aside from the invention of writing and the conquests of Alexander, the next most consequential advancement in human communication came in the mid-15th century with Gutenberg’s invention of the printing press. This changed the communication game in significant ways and kicked off a knowledge revolution that would lead to the Renaissance, the Protestant Reformation and eventually the Enlightenment, which introduced many of the ideas that have come to define modern society, including the scientific method of investigating the natural world and the “rights of man” that were enshrined in the American Constitution and the Bill of Rights.

An artist’s rendering of Johannes Gutenberg in his workshop.

By removing the human element from the copying process, the printing press both increased the accuracy of shared information and reduced its cost. As the cost of reproduction dropped, the written word became accessible to more ordinary people, which encouraged the spread of literacy in the general population. Ultimately, this led to the Protestant Reformation, with a large number of Christians breaking from the Roman Catholic Church. Christians could now read the Bible for themselves and no longer had to rely on those with special access to the written word for their interpretation. Martin Luther, the father of the Reformation, is alleged to have quipped, “Printing is the ultimate gift of God and the greatest one.”

The printing press removed many of the obstacles between the ordinary person and the written word and resulted in a proliferation of ideas, both good and bad. The witch hunting craze of the 16th and 17th centuries, during which an estimated 50,000 people, mostly older women, were executed on suspicion of practicing witchcraft, was in part fueled by the printing and widespread availability of one book, the Malleus Maleficarum, roughly translated as the Hammer of Witches, published in 1486 by two Catholic clergymen, Heinrich Kramer and Jacob Sprenger. The book purported to teach readers how to identify a witch and turned many ordinary people into demonic detectives. The result: witch hunting hysteria. It’s hard to see this as anything other than a phenomenon inspired by the spread of literacy, combined with a highly-charged religious environment, in the decades after the introduction of the printing press.

Whether we’re talking about Roman roads, the printing press, or more recent inventions like the telephone, radio, television or the internet, social change is usually preceded by advancements in communication technology. But these advancements have often been a double-edged sword and are frequently accompanied by periods of heightened conflict, and an increased propensity for hysterical thinking in the general public. We treasure the opening words of the Declaration of Independence, but we can’t forget the brutality of the French Revolution, even though both were inspired by similar cultural ideals.

There are many parallels between the impact of the printing press on society and what we are seeing today with the internet. Like the printing press, the internet has eliminated obstacles between information and the average consumer. And like every other time this has happened, it’s leading to social upheaval as people adjust to the new information landscape. As in the past, people are asking, is this a good or a bad thing? Does this make society better or worse?

On one hand, the internet empowers those who previously had no power. It provides a platform for those who before had no voice. But, on the other hand, it enables the digital equivalent of witch burnings. Good information has never been so accessible, but wild theories also proliferate online and influence how people vote, how they make health decisions, and who they love or hate. People have access to all the information in the world, but do they have the wisdom to discern the good from the bad?

Is this what progress feels like? Do we zig zag through history but always move forward? Does giving people more access to information always benefit society? These are some of the questions that have been bouncing around my head in recent years. Will people 200 years from now look back on the social changes we are going through today and see them as progress? I think they will, but not because history inevitably marches towards something we can objectively label as “progress”. It will be because they are the end products of the cultural conflicts we are living through right now, and viewed from the destination, whatever path history takes you down will look like progress to those at the end of the race.

Contact the author at ericwaustin@gmail.com.

ERIC’S TECH TALK – My life in video games: a trip through gaming history

King’s Quest III: To Heir is Human (1986)

by Eric W. Austin

It was sometime in the mid-1980s when my father took me to a technology expo here in Maine. I think it was held in Lewiston, but it might have been some other place. (Before I got my driver’s license, I didn’t know where anything was.) This was at a time when you couldn’t buy a computer down at the local department store. You had to go to a specialty shop (of which there were few) or order the parts you needed through the mail. Or you could go to a local technology expo like we were doing.

They didn’t have fancy gadgets or shiny screens on display like you might see today. No, this was the age of hobbyists, who built their own computers at home. It was very much a DIY computer culture. As we walked through the expo, we passed booths selling hard drives and circuit boards. For a twelve-year-old kid, it wasn’t very exciting stuff. But then we passed a booth with a pile of videogames, and my interest was immediately piqued.

My father didn’t have much respect for computer games. Computers were for work in his view. Spreadsheets and taxes. Databases and word processing. But I was there for the games.

I dug through the bin of budget games and pulled out the box for a game called King’s Quest III: To Heir is Human. The game was released in 1986, so the expo must have taken place a year or two after that. The King’s Quest games were a popular series of adventure games released by the now defunct developer, Sierra On-Line.

Somehow I convinced my father to buy it for me, but when I got home I found to my disappointment that it was the PC DOS version of the game and would not play on my Apple II computer. I never did get a chance to play To Heir is Human (still one of the cleverest titles for a game ever!), but I never lost my fascination with the digital interactive experience of videogames.

Some of the earliest computer games did not even involve video graphics. They were text adventure games. I remember playing The Hitchhiker’s Guide to the Galaxy (first released in 1984), a text adventure based on the book series of the same name by Douglas Adams, on a PC in the computer lab at Winslow High School. These games did not have any graphics and everything was conveyed to the player by words on the screen. You would type simple commands like “look north” and the game would tell you there was a road leading away from you in that direction. Then you would type “go north” and it would describe a new scene. These games were like choose-your-own-adventure novels, but with infinitely more possibilities and endless fun. Who knew a bath towel could save your house from destruction or that you could translate alien languages by sticking a fish in your ear?

Wizardry VI: Bane of the Cosmic Forge (1990)

One of my first indelible gaming experiences was playing Wizardry VI: Bane of the Cosmic Forge (released 1990) in my father’s office on a Mac Lisa computer with a six-inch black and white screen. These sorts of games were commonly called “dungeon crawlers” because of their tendency to feature the player exploring an underground, enclosed space, searching for treasure and killing monsters. As was common for the genre at the time, the game worked on a tile-based movement system: press the forward key once, and your character moved forward one space on a grid. The environments for these types of games typically featured a labyrinthine structure, and part of the fun was getting lost. There was no in-game map system, so it was common for players to keep a stack of graph paper and a pencil next to their keyboard. With each step, you would draw a line on the graph paper and using this method you could map out your progress manually for later reference. Some games came with a map of the game world in the box. I remember that Ultima V: Warriors of Destiny (1988) came with a beautiful cloth map, which I thought was the coolest thing ever included with a game.

As I grew up, so did the videogame industry. The graphics improved. The games became more complex. As their audiences matured, games flirted with issues of violence and sexuality. Games like Leisure Suit Larry (1987) pushed into adult territory with raunchy humor and sexual situations, while games like Wolfenstein 3D (1992) had you killing Nazis in an underground bunker in 1940s Germany, depicting violence like never before. These games created quite a controversy in their day among people who saw them as corrupting influences and signs of a coming societal collapse.

Wolfenstein 3D (1992)

In 1996, I bought my first videogame console, the original PlayStation. At the time, Sony was taking a giant gamble, releasing a new console to compete with industry juggernauts like Nintendo and SEGA. The first PlayStation console was the result of a failed joint-effort between Sony and Nintendo to develop a CD-ROM peripheral for the Super Nintendo Entertainment System (SNES), a console released in 1991 in North America. When that deal fell through, Sony decided to develop their own videogame system, and that eventually became the PlayStation.

There was a lot of debate during these years about the best medium for delivering content — solid-state cartridges, which were used for the SNES and the later Nintendo 64 (released 1996), or the new optical CD-ROMs used by Sony’s PlayStation. The cartridges used by Nintendo (and nearly every console released before 1995) featured faster data transfer speeds than optical CDs but had a smaller potential data capacity. Optical media won that debate, as the Nintendo 64 was the last major console to use a cartridge-based storage format for its games. Funnily enough, this debate has come full circle in recent years, with the resurgence of solid-state storage solutions like flash drives and solid-state drives. Those storage limitations of the past have mostly been solved, and solid-state memory still offers faster data transfer rates than optical options like CD-ROMs or DVDs (or now Blu-ray).

Star Wars arcade game (1983)

The PlayStation was also built from the ground up to process the new polygon-based graphics technology that was becoming popular with computer games, instead of the old sprite-based graphics of the past. This was a graphical shift away from the flat, two-dimensional visuals that had been the standard up to that point. This shift was an evolution that had taken place over a number of years. First, there was something called vector graphics, which were basically just line drawings in three-dimensional space. I remember playing a Star Wars arcade game (released 1983) with simple black and white vector graphics down at the arcade that used to be located next to The Landing, in China Village, when I was a kid. The game simulated the assault on the Death Star from the original 1977 movie and featured unique flight-stick controls that were very cool to a young kid who was a fan of the films.

Videogame consoles have changed a lot over the years. My cousin owned an SNES and used to bring it up to my house in the summers to play Contra and Super Mario World. Back then, the big names in the industry were Nintendo, SEGA and Atari. Nintendo is the only company from those days that is still in the console market.

Up until the late 1990s, each console was defined by its own unique library of games, with much of the development happening in-house by the console manufacturers. This has changed over the years so that nearly everything today is made by third-party developers and released on multiple platforms. In the early 2000s, when this trend was really taking off, many people theorized it would spell doom for the videogame console market because it was removing each console’s uniqueness, but that has not turned out to be the case.

Videogames are usually categorized into genres much like books or movies, but the genres which have been most popular have changed drastically over the years. Adventure games, usually focused on puzzles and story, ruled the day in the early 1980s. That gave way to roleplaying games (RPGs) through the mid-’90s, which were basically adventure games with increasingly complex character progression systems. With the release of Wolfenstein 3D from id Software in 1992, the world was introduced to the first person shooter (FPS) genre, which is still one of the most popular game types today.

The “first person perspective,” as this type of game is called in videogame parlance, had previously been used in dungeon crawlers like the Wizardry series (mentioned above) and Eye of the Beholder (1991), but Wolfenstein coupled this perspective with a type of action gameplay that proved immediately popular and enduring. Another game I played that would prove influential for this genre was Marathon, an alien shooter released in 1994 for the Apple Macintosh and developed by Bungie, a studio that would later go on to create the incredibly popular Halo series of games for Microsoft’s Xbox console.

One of the things I have always loved about videogames is the way the industry never sits still. It’s always pushing the boundaries of the interactive experience. Games are constantly being driven forward by improving technology and innovative developers who are searching for new ways to engage players. It is one of the most dynamic entertainment industries operating today. With virtual reality technology advancing quickly and promising immersive experiences like never before, and creative developers committed to exploring the possibilities of emergent gameplay afforded by more powerful hardware, I’m excited to see where the industry heads in the coming decades. If the last thirty-five years are any indication, it should be awesome!

Eric W. Austin writes about technology and community issues. Contact him by email at ericwaustin@gmail.com.

ERIC’S TECH TALK: CBC wants to revolutionize internet access in China, but will it work?

by Eric W. Austin

The views of the author in the following column are not necessarily those of The Town Line newspaper, its staff and board of directors.

On the ballot this November is a question that has the potential to revolutionize internet access for residents of China. The question is also long, at over 200 words, a bit confusing and filled with legalese. As a resident of China, a technophile, and a reporter for The Town Line newspaper, I wanted to understand this initiative, figure out exactly what it’s attempting to accomplish, and try to find out what residents of China think about the future of local internet access.

In order to understand the issue, I attended two of the recent information sessions held by the China Broadband Committee and also sat down with Tod Detre, a member of the committee, whom I peppered with questions to clear up any confusion I had.

I also created a post in the Friends of China Facebook group, which has a membership of more than 4,000 people from the town of China and neighboring communities, asking for comments and concerns from residents about the effort. Along with soliciting comments, I included in my post a survey question asking whether residents support the creation of a fiber optic infrastructure for internet access in China. (I should be clear here and point out that the question on the November ballot does not ask whether we should build a fiber optic network in China, only whether the selectboard should move forward with applying for financing to fund the initiative if they find there is sufficient interest to make the project viable. But for my purposes, I wanted to understand people’s thoughts on the goals of the effort and how they felt about their current internet access.)

My Facebook post garnered 86 comments and 141 votes on the survey question. One hundred and twenty people voted in favor of building a fiber optic network in China and 21 people opposed it. (This, of course, was not a scientifically rigorous survey, and the results are obviously skewed toward those who already have some kind of internet access and regularly utilize online platforms like Facebook.)

Before we get into the reasons why people are for or against the idea, let’s first take a look at what exactly the question on the ballot is and some background on what has led up to this moment.

The question before voters in November does not authorize the creation of a fiber optic network in China. It only authorizes the selectboard to begin the process of pursuing the financing that would be required to accomplish that goal – but only if certain conditions are met. So, what are those conditions? The most important condition is one of participation. Since the Broadband Committee’s goal is to pay for the fiber optic network solely through subscriber fees – without raising local taxes – the number of people who sign up for the new service will be the primary determining factor on whether the project moves forward.

If the question is approved by voters, the town will proceed with applying for financing for the initiative, which has a total estimated cost of about $6.5 million, paid for by a bond in the amount of $5.6 million, with the remainder covered through a combination of “grants, donations and other sources.” As the financing piece of the project proceeds, Axiom, the company the town plans to partner with to provide the internet service, will begin taking pre-registrations for the program. Although the length of this pre-registration period has not been completely nailed down, it would likely last anywhere from six months to a year while the town applies for financing. During this period, residents would have an opportunity to reserve a spot and indicate their interest in the new service with a refundable deposit of $100, which would then be applied toward their first few months of service once the program goes live. Because the plan is for the initiative to be paid for by subscriber fees rather than any new taxes, it is essential that the project demonstrate sufficient interest from residents before any work is done or financing acquired.

With approximately 2,300 structures, or households, that could potentially be connected to the service in China, the Broadband Committee estimates that at least 834 participants – or about 36 percent – would need to enroll in the program for it to pay for itself. Any number above this would create surplus revenue for the town, which could be used to pay off the bond sooner, lower taxes, reduce subscriber fees or for other purposes designated by the selectboard. If this number is not reached during the pre-registration period, the project would not proceed.
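
As a quick back-of-the-envelope check of those figures: the participation rate works out as shown below. The dollar amount uses the $54.99-per-month base tier described later in this article, and the true break-even point also depends on bond terms and operating costs, which are not spelled out here.

```python
# Back-of-the-envelope check of the committee's figures quoted above.
potential_connections = 2300     # structures that could be connected
required_subscribers = 834       # committee's stated break-even number

participation_rate = required_subscribers / potential_connections
print(f"Required participation: {participation_rate:.1%}")   # about 36%

base_tier_price = 54.99          # dollars per month, first service tier
annual_revenue = required_subscribers * base_tier_price * 12
print(f"Annual revenue at break-even, base tier: ${annual_revenue:,.0f}")
```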

One of the problems this initiative is meant to alleviate is the cost of installing internet for residents who may not have sufficient internet access currently because bringing high speed cable to their house is cost prohibitive. The Broadband Committee, based on surveys they have conducted over the last several years, estimates that about 70 percent of residents currently have cable internet. The remaining 30 percent have lower speed DSL service or no service at all.

For this reason, for those who place a deposit during the initial signup period, there would be no installation cost to the resident, no matter where they live, including those who have found such installation too expensive in the past. (The lone exception to this guarantee would be residents who do not have local utility poles providing service to their homes. In those rare instances, the fiber optic cable would need to be buried underground and may incur an additional expense.) After the initial pre-registration period ends, this promise of free installation would no longer be guaranteed, although Axiom and the Broadband Committee have talked about holding rolling enrollment periods in the future which could help reduce the installation costs for new enrollees after the initial pre-registration period is over.

What are the benefits of the proposed fiber optic infrastructure over the cable broadband or DSL service that most residents have currently? Speed and reliability are the most obvious benefits. Unlike the copper cable used currently for cable internet, which transmits data via electrical pulses, fiber optic cable transmits data using pulses of light through fine glass fibers and does not run into the same limitations as its copper counterpart. The speed at which data can be transmitted via fiber optic cable is primarily limited by the hardware at either end of the connection rather than the cable itself. Currently, internet service travels out from the servers of your internet provider as a digital signal via fiber optic cable, but then is converted to an analogue signal as it is passed on to legacy parts of the network that do not have fiber optics installed. This process of conversion slows down the signal by the time it arrives at your house. As service providers expand their fiber optic networks and replace more of the legacy copper wire with fiber optics, the speed we experience as consumers will increase, but it is still limited by the slowest point along the network.

The proposed fiber optic network would eliminate this bottleneck by installing fiber optic cable from each house in China back to an originating server with no conversion necessary in between.

Both copper and fiber optic cable suffer from something called “attenuation,” which is a degradation of the strength of the signal as it travels further from its source. The copper cables we currently use have a maximum length of 100 meters before they must be fed through a power source to amplify their signal. In contrast, fiber optic cables can run for up to 24 miles before any significant weakening of the signal starts to become a problem. Moving from copper cable to fiber optics would virtually eliminate problems from signal degradation.

Another downside to the present infrastructure is that each of those signal conversion or amplification boxes requires power to do its job. This means that when the power goes out, the internet goes down too, because these boxes along the route no longer function to push the signal along. The infrastructure proposed by the China Broadband Committee would solve this problem by installing fiber optics along the entire signal route leading back to a central hub station, which would be located in the town of China and powered by a propane generator that will automatically kick on when the power goes out. With the proposed system, as long as you have a generator at your house, your internet should continue to work – even during a localized power outage.

There’s an additional benefit to the proposed fiber optic network that residents would notice immediately. The current cable internet that most of us use is a shared service. When more people are using the service, everyone’s speed decreases. Most of us know that the internet is slower at 5 o’clock in the afternoon than it is at 3 in the morning. The proposed fiber optic network is different, however. Inside the fiber optic cable are hundreds of individual glass strands that lead back to the network source. A separate internet signal can ride on each of these strands without interfering with the others. Hawkeye Connections, the proposed contractor for the physical infrastructure part of the project, would install cable with enough individual strands so that every house along its path could be connected via a different strand within the cable. This means that no one would be sharing a signal with anyone else, and internet slowdowns and speed fluctuations during peak usage should become a thing of the past.

Another change proposed by the CBC initiative would be to equalize upload and download speeds. Presently, download speeds are generally higher than upload speeds, which is a convention in the industry. This is a legacy of the cable TV networks from which they evolved. Cable TV is primarily a one-way street datawise. The video information is sent from the cable provider to your home and displayed on your TV. Very little data is sent the other way, from your home back to the cable provider. This was true of most data streams in the early days of the internet as well. We downloaded pictures, videos and webpages. Nearly all the data was traveling in one direction. But this is changing. We now have Zoom meetings, smart houses and interactive TVs. We upload more information than we used to, which means upload speed is more important than ever. This trend is likely to continue in the years ahead as more of our lives become connected to the internet. The internet service proposed by the Broadband Committee and Axiom, the company contracted to provide the service, would equalize upload and download speeds. For example, the first tier of the service would offer speeds of 50 megabits up and 50 megabits down. This, combined with the other benefits outlined above, should make Zoom meetings much more bearable.

What about costs for the consumer? The first level service tier would offer speeds of 50 megabits download and 50 megabits upload for $54.99 a month. Higher level tiers would include 100/100 for $64.99/month, 500/500 for $149.99/month, and a gigabit line for businesses at a cost of $199.99/month.

Now that we’ve looked at some of the advantages and benefits of the fiber optic infrastructure proposed by the China Broadband Committee, what about the objections? A number of residents voiced their opposition to the project on my Facebook post, so let’s take a look at some of those objections.

One of the most common reasons people are against the project is because they think there are other technologies that will make the proposed fiber optic network obsolete or redundant in the near future. The technologies most often referenced are 5G wireless and Starlink, a global internet initiative being built by tech billionaire and Tesla/SpaceX CEO Elon Musk.

While new 5G cellular networks are currently being rolled out nationwide, it’s not clear when the technology will be widely available here in China. And even when such capability does become available to most residents, it will likely suffer from the same problems as our existing cell coverage – uncertain reception on the outskirts of town and in certain areas. (I still can’t get decent cell reception at my home just off Lakeview Drive, in China Village.) Further, while 5G is able to provide impressive download speeds and low latency, it requires line of sight with the broadcasting tower and can easily be blocked by anything in between, like trees or buildings. Residents of China who suffer from poor internet service or cell phone reception today would likely face the same problems with 5G coverage. Fiber optic cable installation to those residents would solve that problem, at least in terms of internet access, once and for all.

Starlink is a technology that aims to deliver internet access to the world through thousands of satellites in low-earth orbit, but it is still years away from reaching fruition and there is no guarantee it will deliver on its potential. When I spoke with the Broadband Committee’s Tod Detre, he said he applied to be part of the Starlink beta program more than six months ago, and has only recently been accepted (although he’s still awaiting the hardware required to connect). There is also some resistance to the Starlink project, primarily from astronomers and other star gazers, who worry how launching so many satellites into orbit will affect our view of the night sky. As of June, Starlink has launched approximately 1,700 satellites into orbit and currently services about 10,000 customers. The initiative is estimated to cost at least $10 billion before completion. At the moment, the company claims to offer speeds between 50 and 150 megabits and hopes to increase that to 300 megabits by the end of 2021, according to a recent article on CNET.com. To compare, copper-based networks can support data transfer speeds of up to 40 gigabits, while fiber optic lines have a far higher practical ceiling, limited mainly by the capabilities of the hardware at either end of the connection rather than by the cable itself.

While both 5G and technologies like Elon Musk’s Starlink hold a lot of potential for consumers, 5G service is likely to suffer from the same problems residents are already experiencing with current technology, and Starlink is still a big unknown and fairly expensive at $99/month, plus an initial cost of $500 for the satellite dish needed to receive the signal. It’s also fairly slow, even at the future promised speed increase of 300 megabits. As the Broadband Committee’s chairman, Bob O’Connor, pointed out at a recent public hearing on the proposed network, bandwidth needs have been doubling every ten years and are likely to continue increasing in a similar fashion for the near future.

Another objection frequently voiced by residents is that the town government should not be in the business of providing internet service. O’Connor addressed this concern at a recent public hearing before the China selectboard. He said that residents should think about the proposed fiber optic infrastructure in the same way they view roads and streets. (This is a particularly apt comparison, since the internet is often referred to as the “information superhighway.”) Although the town owns the roads, O’Connor noted, it may outsource their maintenance to a contractor; in the same way, the town would own this fiber optic infrastructure but would subcontract the service and maintenance of the network to Axiom.

The Broadband Committee also points out that there are benefits that come with the town’s ownership of the fiber optic cable and hardware: if residents don’t like the service they are receiving from one provider, they can negotiate to receive service from another instead. The committee has said that although Axiom would initially be contracted for 12 years, there would be a service review every three years to determine whether the town is happy with its performance. If not, the town could negotiate with another provider. This gives the town significant leverage to find the best service available – leverage we would not have if the infrastructure were owned by a service provider like Spectrum or Consolidated Communications (neither of which has shown much interest in upgrading the China area with fiber optic cable in the near term).

There are certainly risks and outstanding questions associated with the committee’s proposal. Will there be enough subscribers for the project to pay for itself? Could another technology come along that would make the proposed infrastructure obsolete or less attractive in the future? Will the proposed contractors – Axiom, and Hawkeye Connections (which would install the physical infrastructure) – provide quality, reliable service to residents over the long term? Can we expect the same level of maintenance coverage to fix storm damage and outages that we experience now?

On the other hand, the potential benefits of the project are compelling. The internet, love it or hate it, has become an essential part of everyday life and looks likely to become only more so in the years ahead. A reliable, high-speed infrastructure for residential internet access is likely to play an important role in growing China’s economy and attracting young families looking for a place to live and work.

Ultimately, voters will decide if the potential benefits outweigh the possible risks and pitfalls come this November.

Contact the author at ericwaustin@gmail.com.

More information is also available on the CBC website, chinabroadband.net.


ERIC’S TECH TALK: A primer for finding good information on the internet

by Eric W. Austin

The world is filled with too much information. We are inundated with information during nearly every moment of every day. This is a problem because much of it is simply spin and misinformation, and it can be difficult to separate the quality information from the background noise that permeates the internet.

I think being successful in this endeavor comes down to two things: learning to discern the quality sources from the sketchy ones, and getting in the habit of viewing a variety of sources before leaping to conclusions.

Let’s deal with the first one: quality sources. How do you determine the good sources from the bad?

To visualize the problem we’re dealing with, imagine a perfect source as a dot in the middle of a blank page. This hypothetical source is unbiased and completely reliable. (There is, of course, no such source or I would simply recommend it to you and this would be a very short article.)

Now imagine each and every source on the internet as another dot on this page. The distance each source is from the center dot is an indication of greater bias and lower reliability.

Oh, but you might complain, this is such a highly subjective exercise! And you would be absolutely right. Judging the quality of information on the web is not a hard science; it is a skill you need to develop over time, but it is also a skill which has become more and more essential to life in the modern age.

As a part of this mental exercise it’s important to be aware of the subjective weaknesses inherent in the human condition that are likely to trip you up. For example, we are much more likely to judge sources which align with our existing views as less biased than those sources which do not. So, you need to compensate for that when drawing the mental picture that I described above.

When I was learning to drive, our driver’s education teacher emphasized the importance of looking at both side mirrors, the rearview mirror and glancing over my shoulder before making any move in traffic such as changing lanes. Why wasn’t it sufficient to rely on only a single method to judge the safety of an action before taking it? Because each method has a blind spot which can only be compensated for by employing more than one tactic prior to making a decision. Using overlapping sources of information decreases the chances of missing something important.

Judging information on the internet is kind of like that: no one method is going to be sufficient and each will have a particular blind spot which can only be counterbalanced by employing multiple solutions.

Certain online resources can help you draw a more accurate picture of the sources you rely on. The website MediaBiasFactCheck.com assesses more than 3,600 websites and news sources from across the internet – on both the right and the left – for bias and credibility. Allsides.com is another resource that rates the political bias of websites and often places news stories from the left and right side by side, so you can see how the same information is being presented. Allsides also has a handy chart rating the bias of the most well-known news sources from across the political spectrum. I don’t always agree with the ratings these sites supply (and neither will you), but they are a good place to start and should be another tool in your information-analysis utility belt.

If you are confronted with a source you do not have any prior experience with, search for it using the above resources and also do a web search for the name of the website. There may be a Wikipedia page about it that will tell you where the site’s funding comes from and whether the site has been caught peddling false information in the past. A web search may also dig up stories by other news sources reporting on false information coming from that website. There is nothing news sources like better than calling out their rivals for shoddy reporting. Use that to your advantage.

If a web search for the site turns up nothing, that could be a warning signal of its own. On the internet, it is absurdly easy to throw up a website and fill it with canned content, interspersed with propaganda or conspiracy theories to draw internet clicks and advertising dollars. It is becoming increasingly common for politically motivated groups to create credible-looking news sites in order to push a specific ideological agenda, so look for sources with some history of credibility.

So, what about bias? Isn’t everything biased? Well, yes, which is why our unbiased and perfectly reliable source above is only hypothetical. The skill you must develop is in determining how far each source is from matching that hypothetical ideal, and then building a well-rounded collection of credible sources representing various points of view.

One thing that must be mentioned is that bias and credibility are not mutually exclusive. Although sources that are highly biased are also more likely to lack credibility, this is not necessarily a strict correlation. In determining the credibility of a source, bias is only one of the factors to consider.

Let’s take a look at two news sources on opposite sides of the political spectrum: Fox News and CNN.

Initially, you might be tempted to think these are the worst examples to use in a discussion of reliable sources because of their high level of bias, but I would argue the opposite. First, it is important to recognize the difference between news and opinion. Most large news organizations separate their news reporters from their opinion commentators. If a website does not make this distinction apparent to the consumer, that may not be a source you want to trust. Separating news from editorial content is standard policy because bias is a well-known problem for most news organizations, and keeping these two areas apart is a safeguard against too much opinion bleeding into the news. Of course, this is not a perfect solution, but such a precaution is better than nothing, and smaller niche sites often lack the resources or the desire to make this distinction.

This does not mean that smaller niche sites cannot be valuable sources of information, especially if that information is of a sort in which the site specializes, but it is something to consider when evaluating the validity of information, especially about controversial topics.

Another reason to include several high profile news sites from both sides of the aisle in your list of sources is that any missteps by these organizations are less likely to escape notice than smaller niche news sites. You can bet CNN will be quick to pounce on any sort of shoddy reporting put out by Fox News and vice versa.

So, bias is not necessarily a bad thing. It is important that we have right-leaning news organizations to rigorously investigate left-leaning administrations, just as it’s important to have left-leaning news organizations to report on right-leaning administrations. That is the beautiful mess that is the American free press. Your best bulwark against bias is to have a diversity of credible sources at your disposal representing a wide range of viewpoints.

Remember that the best safeguard against our own biases is to seek out opposing opinions in order to constantly challenge our preconceptions and force ourselves to regularly reevaluate our conclusions. Nobody is right all the time, and most of us are wrong more often than we’d like to admit. Cognitive dissonance – that sense of discomfort we feel when encountering information which threatens to upend our carefully set up boundaries and views of the world – is not something to run from but to embrace. Finding out you are wrong is often the only way to discover what is right.

Eric W. Austin writes about local issues and technology. He can be reached at ericwaustin@gmail.com.

ERIC’S TECH TALK: The 5G future and the fight with China

by Eric W. Austin

There’s a new wireless technology being rolled out this year that promises to be the biggest technological revolution since the invention of the cell phone. Dubbed 5G NR (“fifth generation new radio”), this isn’t just an upgrade to the existing 4G cellular network, but a radical reinvention of wireless technology. It will require an enormous investment in new infrastructure, but it also promises massive improvements in bandwidth, speed and latency.

This new cellular technology achieves these incredible improvements by making fundamental changes to the way cellular networks function. Whereas the old 4G technology used radio waves in the microwave band between 700 MHz and 3 GHz to communicate, 5G will tap into previously unused radio frequencies in ranges from 30 GHz to 300 GHz, known as millimeter bands. In addition, the new 5G technology will transmit across wider frequency channels of up to 400 MHz, compared with 4G’s limit of only 20 MHz.

Now, that may sound like a lot of technobabble, but it has real world implications, so let me explain.

A radio wave can be imagined as a wavy line traveling through space at the speed of light. Information is transmitted by manipulating the crests and valleys that make up that wavy line, much like the dots and dashes in Morse code. The number of crests and valleys that pass a point in space in a given amount of time is called the frequency of the radio wave, and it determines how much information can be transmitted. Since you can’t increase the speed at which a radio wave travels (it always travels at the speed of light), the only way to increase information transfer is to squeeze more crests and valleys into the same amount of time – that is, to increase the frequency. You can think of this as the difference between a wavy piece of string and a tightly coiled spring. While both the string and the spring are made from material of the same length, the spring contains a greater number of crests and valleys and takes up considerably less space. This is the basic concept behind the move in 5G to transmit using higher frequency radio waves.
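
The “millimeter band” name falls straight out of this relationship, since a radio wave’s length is simply the speed of light divided by its frequency. Here is a quick sketch using the frequencies cited above:

# Wavelength = speed of light / frequency.
SPEED_OF_LIGHT = 300_000_000  # meters per second, approximately

bands = {
    "4G low band (700 MHz)": 700e6,
    "4G upper band (3 GHz)": 3e9,
    "5G millimeter wave, low end (30 GHz)": 30e9,
    "5G millimeter wave, high end (300 GHz)": 300e9,
}

for name, hz in bands.items():
    wavelength_mm = SPEED_OF_LIGHT / hz * 1000  # meters converted to millimeters
    print(f"{name}: wavelength of about {wavelength_mm:.0f} mm")

The 30 to 300 GHz signals come out at roughly 10 millimeters down to 1 millimeter long, which is where the band gets its name.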

Since the higher frequency radio waves of 5G technology are capable of transmitting a much greater amount of data than the microwave-based 4G technology that came before, one might reasonably ask: why aren’t we using them already? The answer is simple. These high-frequency waves are much smaller, with their crests and valleys more tightly packed together, and therefore require receivers that are far more sensitive and difficult to manufacture. While such receivers have been available for military applications for a number of years, it has taken time for them to become cost effective to produce for wider commercial use. That time has now come.

The ability to fit more information into smaller transmissions, in addition to the use of wider frequency channels, means as much as a hundredfold increase in data transfer rates, along with lower power consumption for devices.

However, there are also some significant downsides to using these higher frequencies. While millimeter waves can pack more information into a single broadcast, their shorter wavelength means they can also be easily blocked by obstacles in the environment and absorbed by atmospheric gases. Although the antennas needed to receive these transmissions will be much smaller than the giant cell towers in use today, we will need more of them because 5G antennas require line-of-sight in order to receive transmissions. Instead of cell towers every few miles, as we have for our current 4G/3G cellular network, hundreds of thousands of smaller antennas will have to be installed on office buildings, telephone poles and traffic lights.

One reason this new 5G technology couldn’t have been implemented earlier is that it depends on the existing fourth-generation infrastructure already in place to compensate for these deficiencies.

While the new 5G technology has some real benefits to human user experience, like having enough bandwidth to stream 50 4K movies simultaneously, speeds that are 20 times greater than the average U.S. broadband connection, and the ability to download a high definition movie in less than a second, the real excitement lies in how this upgrade will benefit the machines in our lives.
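
Those headline figures hang together reasonably well if you run the numbers. The sketch below uses assumed figures – roughly 25 megabits per second for a 4K stream, 100 megabits per second for average U.S. broadband, a 3 gigabyte HD movie and a 20 gigabit-per-second theoretical 5G peak – so treat it as a rough sanity check, not a specification.

# Sanity-checking the 5G claims above with assumed, rough figures.
stream_4k_mbps = 25                       # assumed bitrate of one 4K stream
fifty_streams_mbps = 50 * stream_4k_mbps  # 1,250 Mbps to stream 50 movies at once

avg_us_broadband_mbps = 100               # assumed average U.S. broadband speed
twenty_times_avg_mbps = 20 * avg_us_broadband_mbps  # 2,000 Mbps

hd_movie_gigabits = 3 * 8                    # a ~3 GB HD movie is about 24 gigabits
seconds_at_5g_peak = hd_movie_gigabits / 20  # at an assumed 20 Gbps theoretical peak

print(fifty_streams_mbps, twenty_times_avg_mbps, round(seconds_at_5g_peak, 1))

Each of those figures lands in the low gigabit-per-second range, well under the technology’s theoretical peak.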

A confluence of technologies ripening over the next few years is set to revolutionize our lives in a way that promises to be greater than the sum of the individual parts: this new, high-speed 5G cellular upgrade; artificial intelligence; and the rapidly widening world of the Internet of Things (IoT). These three technologies, each with astonishing potential on its own, will combine to change our lives in ways we can only begin to imagine.

I have spent this article talking about 5G, and you have likely heard a bit about the emerging field of artificial intelligence, but the final item on this list, the Internet of Things, bears a bit of explaining. The Internet of Things is an industry buzzword referencing the increasing level of sophistication built into everyday appliances. Your car now routinely has cameras, GPS locators, accelerometers and other sensors installed in it. Soon nearly every electrical device in your house will be similarly equipped. In the future, when you run out of milk, your refrigerator will add milk to a list of needed items stored in the cloud. On your way to the grocery store, your home A.I. will send a message ahead of you and robots at the store will prepare a shopping cart with the requested items, which will be waiting for you when you arrive. Stepping through your front door after a long day at work, your phone will ping you with a list of recipes you can prepare for dinner based on the items you’ve recently purchased.

This is the Internet of Things. It’s every device in your life quietly communicating behind the scenes in order to make your life easier. Although this idea might seem a bit creepy at first, it’s coming whether you like it or not. According to statistics website Statista.com, there are currently more than 26 billion devices worldwide communicating in this way. By 2025, that number is expected to top 75 billion.

The upgrade to 5G, with its increases in speed and bandwidth, is not so much a benefit to us humans as it is an aid to the machines in our lives. As more and more devices come online and begin to communicate with each other, the demand for greater speed and bandwidth will increase exponentially. Soon the devices in your house will be using more bandwidth than you are.

There are also some significant security concerns arising from the need to build additional infrastructure to support the new 5G network. It will require the installation of billions of antennas and 5G modems across the world, in every town, city and government building. But who will build them? According to a February 2019 article in Wired magazine, “as of 2015, China was the leading producer of 23 of the 41 elements the British Geological Society believes are needed to ‘maintain our economy and lifestyle’ and had a lock on supplies of nine of the 10 elements judged to be at the highest risk of unavailability.” With this monopoly on the materials needed for high tech production, Chinese companies like Huawei, which is already the largest telecommunications manufacturer in the world, are set to corner the market on 5G equipment.

You may have heard of Huawei in the news recently, as the U.S. government has accused the company of everything from violating international sanctions to installing backdoors, on behalf of the Chinese government, in the hardware it manufactures. China’s second largest telecommunications company, ZTE, which is also looking to seize a piece of the emerging 5G pie, has been the subject of similar accusations, and last year paid more than $1.4 billion in fines for violating U.S. sanctions against Iran and North Korea.

Do we really want to build our communications infrastructure with equipment made by companies with close ties to the Chinese government? It’s a real concern for security experts in the U.S. and other western countries. Fortunately, European companies like Nokia and Ericsson, South Korea’s Samsung and California’s Cisco Systems are emerging as threats to this Chinese monopoly.

The new technology of 5G is set to revolutionize cellular communications in the next few years, but the real story is how the confluence of technologies like artificial intelligence and the Internet of Things, in combination with this upgrade in communications, will change our lives in ways we can’t possibly foresee. The 5G future will be glorious, exciting, and fraught with danger. Are you ready for it?

ERIC’S TECH TALK: Where are all the aliens?

by Eric W. Austin

(The views of the author in the following column are not necessarily those of The Town Line newspaper, or its staff and board of directors.)

Where is everybody? That’s the question posed by Italian physicist and Nobel Prize winner Enrico Fermi in 1950 over a casual lunch with his fellow physicists at the famous Los Alamos National Laboratory, in New Mexico.

To understand Fermi’s question and why he asked it, we must first review a bit of background on Earth’s own rocky road to life.

The earth formed, scientists tell us, about 4.5 billion years ago, 9 billion years after the Big Bang. From a cosmological standpoint, the earth is a bit of a late-bloomer.

After forming, Earth was a hot ball of glowing, molten rock – much too hot for life – for nearly half a billion years, but eventually the surface cooled enough for the first oceans to form. Now the stage was set for life, but once conditions were favorable, how long did it take for life to develop on the new planet?

The answer, surprisingly, is not very long. According to some estimates, it may have been as little as a hundred million years after the earth cooled. From the perspective of the universe, that is hardly any time at all – just a geological blink of the cosmic eye.

Assuming this is true of life across the universe and not simply a cosmic fluke when it comes to Earth, we would expect star systems which formed much earlier than our own to have developed life billions of years before ours did.

Add to this the understanding that it has taken our ancestors only a few million years to go from tree-dwelling primates to radio-broadcasting prima donnas, and it suggests that any civilization with as little as a million-year head start on us would have spread across half the galaxy before we had even crawled out of the trees.

So, where are all the aliens?

This question has perplexed scientists for more than half a century and is known as the Fermi Paradox.

“Fermi realized that any civilization with a modest amount of rocket technology and an immodest amount of imperial incentive could rapidly colonize the entire Galaxy,” says Seth Shostak, a senior astronomer at the SETI Institute, which conducts the search for extraterrestrial intelligence. “Within ten million years, every star system could be brought under the wing of empire.”

He continues, “Ten million years may sound long, but in fact it’s quite short compared with the age of the Galaxy, which is roughly ten thousand million years. Colonization of the Milky Way should be a quick exercise.”
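
Shostak’s timescale is easy to sanity-check with back-of-the-envelope arithmetic. The figures below – a 100,000 light-year galaxy and a colonization wavefront creeping along at one percent of light speed – are rough assumptions for illustration only.

# How long would it take to span the galaxy at a leisurely pace?
galaxy_diameter_ly = 100_000   # the Milky Way is very roughly 100,000 light-years across
wavefront_speed = 0.01         # assumed: colonization spreads at 1% of light speed

years_to_cross = galaxy_diameter_ly / wavefront_speed
print(f"About {years_to_cross / 1e6:.0f} million years to span the galaxy")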

This creates a bit of a quandary for those searching for intelligent life beyond our solar system. On one hand, life appeared on Earth very early in its history – almost immediately, once conditions were right – so we would expect life to have appeared elsewhere in the universe just as expeditiously. Since there are many stars much older than our sun, it stands to reason that life would have popped up in many parts of the universe long before it did here on Earth.

On the other hand, it’s hard to get past the fact that we haven’t yet found any signs of life – not a smidge, smudge or random radio signal beamed out from Alpha Centauri. Nothing. Nada. Zilch.

There must be something wrong with this picture.

Maybe, goes the thinking of some scientists, our assumption that life appeared very quickly here on Earth is wrong simply because the underlying assumption that life originated on Earth is wrong.

In other words, maybe life didn’t originate on Earth at all. Maybe it came from somewhere else. This idea is called the Panspermia Theory for the origin of life. The theory posits that life originated elsewhere in the universe and traveled here early in Earth’s history by way of an interstellar asteroid or meteor. Some scientists have even speculated that the impact resulting in the formation of our moon also brought with it the first microbes, seeding Earth with the life that would eventually evolve into you and me.

Where, though, did it come from? With hundreds of billions of stars in our galaxy alone, there are a lot of places to choose from, but there is one very real possibility much closer to home.

I’m speaking of Mars, the fourth planet from the sun, named for the Roman god of war. Mars is slightly older than the earth, at about 4.6 billion years, and although both planets began as fiery balls of molten rock, Mars lies farther from the sun and is only about half the size of the earth, so it cooled much faster. Scientists believe the now-dead planet was once covered with water and enjoyed a temperate climate sometime in the distant past. The famous “canals” of Mars turned out to be an illusion of early telescopes, but the planet’s surface really is carved with channels cut by flowing liquid water – no little green men required.

When the fires of Mars’ molten core began to die, more than 4 billion years ago, the planet lost the protective magnetic field that core generated; its atmosphere was stripped away, and the surface was eventually freeze-dried by the relentless solar wind. At that point, any life it had either died or retreated far beneath the planet’s surface.

What this all means is that conditions were right for life on Mars hundreds of millions of years before conditions were right for it here on Earth.

If we are willing to accept that life sprang up on Earth in a very short time (geologically speaking), then couldn’t the same also be true of Mars? If so, life could have appeared on Mars while Earth was still a smoldering hellscape. And if we grant these two suppositions, it is a small leap to imagine that the life we see on Earth actually originated on Mars and traveled here early in our history.

Are we all originally Martians? It’s an intriguing possibility, and a question to which we may soon have an answer. NASA’s InSight lander touched down on Mars just last month, and the agency recently released the first recordings of a Martian wind rippling across the dusty planet. NASA also has plans for a manned mission to Mars sometime in the 2030s. Once soil samples are brought back for analysis, we may finally be able to determine whether our conjecture about past life on Mars is true. It might also tell us whether that past life bears any resemblance to the life we find on Earth.

So, the next time you’re looking up at the night sky, admiring the cosmic majesty allowed by Maine’s clear view of the stars, give a little wave. Someone, somewhere may be looking back at you and giving a little wave of their own. They might even be your distant relative.

Eric W. Austin writes about technology and community issues. He can be reached by email at ericwaustin@gmail.com.

ERIC’S TECH TALK: Surviving the surveillance state

An artist’s rendering of a Neanderthal.

by Eric W. Austin

Let me present you with a crazy idea, and then let me show you why it’s not so crazy after all. In fact, it’s already becoming a reality.

About ten years ago, I read a series of science-fiction novels by Robert J. Sawyer called The Neanderthal Parallax. The first novel, Hominids, won the coveted Hugo Award in 2003. It opens with a scientist, Ponter Boddit, as he conducts an experiment using an advanced quantum computer. Only Boddit is not just any scientist: he’s a Neanderthal living on a parallel Earth where the Neanderthal survived to the modern era, rather than us Homo sapiens.

Contrary to common misconception, the Neanderthal were not our progenitors, but a species of human which co-existed with us for millennia before mysteriously dying off about 28,000 years ago, during the last ice age. Based on DNA evidence, modern humans and Neanderthal shared a common ancestor about 660,000 years in the past.

Scientists debate the causes of the Neanderthal extinction. Were they less adaptable to the drastic climate changes happening at the time? Did conflict with our own species result in their genocide? Perhaps, as some researchers have proposed, homo sapiens survived over their Neanderthal cousins because we had a greater propensity for cooperation.

In any case, the traditional idea of the Neanderthal as dumb, lumbering oafs is not borne out by the latest research, and interbreeding between Neanderthal and modern humans was actually fairly common. In fact, those of us of European stock have received between one and four percent of our DNA from our Neanderthal forebears.

The point I’m trying to make is that it could just as easily have been our species, Homo sapiens, that died off, leaving the Neanderthal to survive into the modern age instead.

This is the concept author Robert Sawyer plays with in his trilogy of novels. Sawyer’s main character, the Neanderthal scientist Ponter Boddit, lives in such an alternate world. In the novel, Boddit’s quantum experiment inadvertently opens a door to a parallel world — our own — and this sets up the story for the rest of the series.

The novels gained such critical praise at the time of their publication not just because of their seamless weaving of science and story on top of a clever premise, but also because of the thought Sawyer put into the culture of these Neanderthal living on an alternate Earth.

The Neanderthal, according to archeologists, were more resilient and physically stronger than their Homo sapiens cousins. In Sawyer’s world, a single blow from a Neanderthal can be enough to kill a fellow citizen, and in consequence the Neanderthal of the novels have taken drastic steps to reduce violence in their society. Any incident of serious physical violence results in the castration of the implicated individual and of all others who share at least half his genes, including parents, siblings and children. In this way, violence has slowly been weeded out of the Neanderthal gene pool.

A comparison between human (left) and Neanderthal (right) skulls.

About three decades before the start of the first novel, Hominids, a new technology is introduced into Neanderthal society to further curb crime and violence. Each Neanderthal child has something called a “companion implant” inserted under the skin of their forearm. This implant is a recording device which monitors every individual constantly with both sound and video. Data from the device is beamed in real-time to a database dubbed the “alibi archive,” and when there is any accusation of criminal conduct, this record is available to exonerate or convict the individual being charged.

Strict laws govern when and by whom this information can be accessed. Think of our own laws regarding search and seizure outlined in the Fourth Amendment to the Constitution.

By these two elements — a companion implant which monitors each citizen 24/7, and castration as the only punishment for convicted offenders — violence and crime have virtually been eliminated from Neanderthal society, and incarceration has become a thing of the past.

While I’m not advocating for the castration of all violent criminals and their relations, the idea of a companion implant is something that has stuck with me in the years since I first read Sawyer’s novels.

Could such a device eliminate crime and violence from our own society?

Let’s take a closer look at this idea before dismissing it completely. One of the first objections is about the loss of privacy. Constant surveillance? Even in the bathroom? Isn’t that crazy?

Consider this: according to a 2009 article in Popular Mechanics magazine, there are an estimated 30 million security cameras in the United States, recording more than four billion hours of footage every week, and that number has likely climbed significantly in the nine years since the article was published.
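
Those two figures fit together in a striking way. A one-line calculation, using only the numbers quoted above, shows that the average camera would have to be recording for most of every week:

# 4 billion hours of footage per week, spread across 30 million cameras.
hours_per_week_total = 4_000_000_000
cameras = 30_000_000
hours_per_camera = hours_per_week_total / cameras
print(f"About {hours_per_camera:.0f} hours per camera, out of 168 hours in a week")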

Doubtless there’s not a day that goes by that you are not captured by some camera: at the bank, the grocery store, passing a traffic light, going through the toll booth on the interstate. Even standing in your own backyard, you are not invisible to the overhead gaze of government satellites. We are already constantly under surveillance.

Add to this the proliferation of user-generated content on sites like Facebook, Twitter and Instagram. How often do you show up in the background of someone else’s selfie or video podcast?

Oh, you might say, but these are random bits, scattered across the Internet from many different sources. We are protected by the very diffusion of this data!

To a human being, perhaps this is true, but for a computer, the Internet is one big database, and more and more, artificial intelligences are used to sift through this data instead of humans.

Take, for example, Liberty Island, home of the Statue of Liberty. A prime target for terrorists, it is one of the most visited – and most heavily surveilled – locations in America. With hundreds of cameras covering every square inch of the island, you would need an army of human operators to watch all the screens for anything out of place. That is obviously unfeasible, so the authorities have turned to the latest in artificial intelligence instead. AI technology can identify individuals via facial recognition, detect when a bag has been left unattended, or send an alert to its human operators if it spots anything amiss.
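
To make that concrete, here is a minimal sketch of the kind of automated monitoring described, using the stock face detector that ships with the OpenCV library. The camera source and the “alert” step are placeholders for illustration; this is not the system actually deployed on Liberty Island.

import cv2

# Load OpenCV's bundled frontal-face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)  # placeholder: any camera feed or recorded video file

while True:
    ok, frame = capture.read()
    if not ok:
        break  # stream ended or camera unavailable
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # A real system would go further: match faces against a watch list,
        # track unattended objects, and alert a human operator.
        print(f"Detected {len(faces)} face(s) in this frame")

capture.release()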

And we are not only surveilled via strategically placed security cameras either. Our credit card receipts, phone calls, text messages, Facebook posts and emails all leave behind a digital trail of our activities. We are simply not aware of how thoroughly our lives are digitally documented because that information is held by many different sources across a variety of mediums.

For example, so many men have been caught in their wandering ways by evidence obtained from interstate E-ZPass records, it’s led one New York divorce attorney to call it “the easy way to show you took the off-ramp to adultery.”

And with the advancements in artificial intelligence, especially deep learning (which I wrote about last week), this information is becoming more accessible to more people as computer intelligences become better at sifting through it.

We have, in essence, created the “companion implant” of Sawyer’s novels without anyone ever having agreed to undergo the necessary surgery.

The idea of having an always-on recording device implanted into our arms at birth, which watches everything we do, sounds like a crazy idea until you sit down and realize we’re heading in that direction already.

The very aspect that has, up ‘til now, protected us from this constant surveillance — the diffusion of the data, the fact that it’s spread out among many different sources, and the great quantity of data which makes it difficult for humans to sift through — will soon cease to be a limiting factor in the coming age of AI. Instead, that diffusion will begin to work against us, since it is difficult to adequately control access to data collected by so many different entities.

A personal monitoring device, which records every single moment of our day, would be preferable to the dozens of cameras and other methods which currently track us. A single source could be more easily protected, and laws governing access to its data could be more easily controlled.

Instead, we have built a surveillance society where privacy dies by a thousand cuts, where the body politic lies bleeding in the center lane of the information superhighway, while we stand around and complain about the inconvenience of spectator slowing.

Eric W. Austin writes about technology and community issues. He can be reached by email at ericwaustin@gmail.com.