Economics & Technology – Relatively Interesting

Millions, billions, trillions: How to make sense of numbers in the news

Andrew D. Hwang, College of the Holy Cross

National discussions of crucial importance to ordinary citizens – such as funding for scientific and medical research, bailouts of financial institutions and the current Republican tax proposals – inevitably involve dollar figures in the millions, billions and trillions.

Unfortunately, math anxiety is widespread even among intelligent, highly educated people.

Complicating the issue further, citizens emotionally undeterred by billions and trillions are nonetheless likely to be ill-equipped for meaningful analysis because most people don’t correctly intuit large numbers.

Happily, anyone who can understand tens, hundreds and thousands can develop habits and skills to accurately navigate millions, billions and trillions. Stay with me, especially if you’re math-averse: I’ll show you how to use school arithmetic, common knowledge and a little imagination to train your emotional sense for the large numbers shaping our daily lives.

This is what 1,000 dots looks like. (Source:  Wait But Why)

Estimates and analogies

Unlike Star Trek’s Mr. Spock, scientists and mathematicians are not exacting mental calculators, but habitual estimators and analogy-makers. We use “back of the envelope” calculations to orient our intuition.

The bailout of AIG after the mortgage-backed securities crisis cost more than US$125 billion. The Panama Papers document upward of $20 trillion hidden in a dark labyrinth of shell companies and other tax shelters over the past 40 years. (The recently published Paradise Papers paint an even more extensive picture.) On the bright side, we recovered $165 million in bonuses from AIG executives. That’s something, right?

Let’s find out: On a scale where a million dollars is one penny, the AIG bailout cost taxpayers $1,250. The Panama Papers document at least $200,000 missing from the world economy. On the bright side, we recovered $1.65 in executive bonuses.

In an innumerate world, this is what passes for fiscal justice.

Let’s run through that again: If one penny represents a million, then one thousand pennies, or $10, represents a billion. On the same scale, one million pennies, or $10,000, represents a trillion. When assessing a trillion-dollar expenditure, debating a billion dollars is quibbling over $10 on a $10,000 purchase.

Here, we’ve scaled monetary amounts so that “1,000,000” comprises one unit, then equated that unit to a familiar – and paltry – quantity, one penny. Scaling numbers to the realm of the familiar harnesses our intuition toward understanding relative sizes.

In a sound bite, a savings of $200 million might sound comparable to a $20 trillion cost. Scaling reveals the truth: One is a $2 (200-cent) beverage, the other the $200,000 price of an American home.
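To make this scaling habit concrete, here is a minimal Python sketch (an illustration added here, not part of the original article) that applies the penny-per-million analogy to the dollar figures quoted above.

def scale_to_pennies(amount_dollars, unit=1_000_000):
    """Scale a dollar amount so that one `unit` ($1 million here) becomes one penny."""
    return amount_dollars / unit * 0.01

# Figures quoted in the article, viewed on the penny-per-million scale.
examples = {
    "AIG bailout": 125e9,            # more than $125 billion
    "Panama Papers": 20e12,          # upward of $20 trillion
    "Recovered AIG bonuses": 165e6,  # $165 million
}

for label, dollars in examples.items():
    print(f"{label}: ${scale_to_pennies(dollars):,.2f} on the penny scale")
# AIG bailout: $1,250.00 | Panama Papers: $200,000.00 | bonuses: $1.65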

If time were money

Suppose you landed a job paying $1 per second, or $3,600 per hour. (I assume your actual pay, like mine, is a tiny fraction of this. Indulge the fantasy!) For simplicity, assume you’re paid 24/7.

At this rate, it would take one million seconds to acquire $1 million. How long is that in familiar terms? In round numbers, a million seconds is 17,000 minutes. That’s 280 hours, or 11.6 days. At $1 per second, chances are you can retire comfortably at the end of a month or few.

At the same job, it takes 11,600 days, or about 31.7 years, to accumulate $1 billion: Doable, but you’d better start young.

To acquire $1 trillion takes 31,700 years. This crummy job doesn’t pay enough!

This analogy gives a taste for the absolute size of a billion, and perhaps of a trillion. It also shows the utter impossibility of an ordinary worker earning $1 billion. No job pays a round-the-clock hourly wage of $3,600.
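The same back-of-the-envelope conversion is easy to script. This short sketch, again only an illustration, converts a dollar amount into time worked at the hypothetical round-the-clock wage of $1 per second.

def time_to_earn(amount_dollars, dollars_per_second=1.0):
    """Days and years needed to earn `amount_dollars` at a nonstop wage of $1 per second."""
    seconds = amount_dollars / dollars_per_second
    days = seconds / 86_400      # seconds in a day
    return days, days / 365.25   # average days in a year

for label, amount in [("$1 million", 1e6), ("$1 billion", 1e9), ("$1 trillion", 1e12)]:
    days, years = time_to_earn(amount)
    print(f"{label}: {days:,.1f} days (about {years:,.1f} years)")
# $1 million: ~11.6 days; $1 billion: ~31.7 years; $1 trillion: ~31,700 years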

Nice work if you can get it

Let’s examine the wealth of actual multi-billionaires. Our calculations prove that they acquired more than $1 per second over long intervals. How much more?

Testifying to the Senate Judiciary Committee on July 27, William Browder, an American-born businessman with extensive Russian dealings, estimated that Vladimir Putin controls assets of $200 billion. Let’s assume this figure is substantially correct and that Putin’s meteoric rise began 17 years ago, when he first became president of Russia. What is Putin’s average income?

Seventeen years is about 540 million seconds; $200 billion divided by this is … wow, $370 per second. $1,340,000 per hour. Yet even at this colossal rate, acquiring $1 trillion takes 85 years.

The Panama Papers document some $20 trillion – the combined fortunes of one hundred Vladimir Putins – sequestered in shell companies, untaxed and untraceable. Though the rate of leakage has surely increased over time, for simplicity let’s assume this wealth has bled steadily from the global economy, an annual loss around $500 billion.

How much is this in familiar terms? To find out, divide $500 billion by 31.6 million seconds. Conservatively speaking, the Panama Papers document an ongoing loss averaging $16,000 per second, around the clock, for 40 years.
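Each per-second rate above comes from a single division. Here is a minimal sketch, using only the figures quoted in this section and the simplifying assumption of a steady rate of accumulation.

SECONDS_PER_YEAR = 31_557_600  # roughly 31.6 million seconds (365.25 days)

def dollars_per_second(total_dollars, years):
    """Average rate of accumulation, assuming the total accrued steadily over `years`."""
    return total_dollars / (years * SECONDS_PER_YEAR)

putin_rate = dollars_per_second(200e9, 17)        # ~$370 per second
panama_rate = dollars_per_second(20e12 / 40, 1)   # ~$500 billion per year -> ~$16,000 per second

print(f"Putin estimate: ${putin_rate:,.0f} per second (${putin_rate * 3600:,.0f} per hour)")
print(f"Panama Papers leakage: ${panama_rate:,.0f} per second, around the clock")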

This is what 15 trillion dollars looks like.

Fighting over scraps

American cities are now vying for a $5 billion Amazon headquarters, a windfall that would transform whichever local economy is lucky enough to win the contract. At the same time, the world economy hemorrhages that amount into a fiscal black hole every few days. Merely stemming this Niagara (not recovering the money already lost) would amount to one hundred new Amazon headquarters per year.

The root cause of our economic plight looms in plain sight when we know the proper scale on which to look. By overcoming math phobia, wielding simple arithmetic, and refusing to be muddled by “gazillions,” we become better citizens, avoiding squabbling over pennies while tens of thousands of dollars go missing.

Andrew D. Hwang, Associate Professor of Mathematics, College of the Holy Cross

This article was originally published on The Conversation. Read the original article.

2017 Witnessed an Incredible Bitcoin Boom – Just How Big Can it Become?

If you were around in 2017 and you read financial news, you probably will have heard of bitcoin by now. Or, at least, you know that something exists and it goes by the name of bitcoin. Last year, the currency went absolutely crazy, and the value of one bitcoin reached an all-time high of $19,000 in early December. Considering it started the year at around the $1,000 mark, this is an incredible surge which highlights the fact that the world is waking up to the e-currency. Could this be a bubble like some have suggested, or will the futuristic payment method soar to even greater heights in 2018?

Even though bitcoin has been all over the news in recent months due to its drastic spike in value in the latter stages of the year, there are still many people who don’t know about the e-currency. In fact, according to a study conducted by Lottoland, the brand behind the first Bitcoin lottery, one in seven people thought that bitcoin was a board game. 79 percent of the respondents had actually heard of bitcoin, but only two percent had invested in it. This shows that the e-currency still has a long way to go.

Bitcoin was invented in 2009 by a mysterious programmer called Satoshi Nakamoto, who is still unknown to the public apart from that name. The idea was to have an internet currency which would bypass the banks and could be used from anywhere in the world. At first, the idea seemed like a novelty, and interest was only shown by other computer experts who were mildly intrigued.

But, as technology has progressed rapidly and more and more businesses operate solely online, the need for one universal currency is growing. Now, experts are suggesting that bitcoin could be this world currency and, in the same way that email surpassed regular post, it could blow regular currencies out of the water, at least in an online setting.

The fact that bitcoin grew by around 1,700 percent last year, with only two percent of the British population having invested in it, highlights just how big it could become if it is adopted by the masses. As it stands, people are investing in the idea of bitcoin becoming the future go-to currency; that is to say, it is all still speculative. If it does take the world by storm, then those who got in at the early stages stand to benefit greatly. Others, who are perhaps more cynical, are holding off on the cryptocurrency because they don’t believe that bitcoin will reach the heights that many are expecting.
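For readers who want to check the growth figure, percentage change takes one line to compute. The sketch below uses the round prices quoted earlier in this article (roughly $1,000 at the start of 2017 and $19,000 at the early-December peak); the roughly 1,700 percent figure in the text presumably reflects a somewhat lower year-end price.

def percent_change(start, end):
    """Percentage gain from a `start` price to an `end` price."""
    return (end - start) / start * 100

start_of_2017 = 1_000     # rough price quoted above for January 2017
december_peak = 19_000    # all-time high quoted above for early December 2017

growth = percent_change(start_of_2017, december_peak)
print(f"Growth from the start of 2017 to the December peak: about {growth:.0f}%")  # ~1800%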

Bitcoin has a history of large fluctuations in price, but the steep climb in 2017, driven by more people waking up to the cryptocurrency, has shown that it has the potential to reach astronomical levels. The only thing that could plausibly halt bitcoin’s rise in 2018 is the emergence of other cryptocurrencies vying to be number one. Ethereum and Litecoin have also risen dramatically in the past year and are making strong cases for mass adoption as well.

If bitcoin does enjoy another successful year similar to 2017, then there is a real possibility that one coin could soon be worth $1 million. If this happens, then those holding it now will have made a serious amount of profit.

 

Always Online: The Evolution of Online Gaming in Recent Years

Ever since the inception of video games, the ability to play with friends has been a core desire. Whether it’s Pong on the original Atari, Street Fighter down at the local arcades, or a four-player GoldenEye deathmatch on the Nintendo 64, the competitive element of gaming has always been there. These days you’d have to go out of your way to find a game that didn’t have some form of online connectivity to facilitate competitive multiplayer, especially in the so-called ‘triple-A’ market. But how did we get to this point in such a short space of time? Hopefully, this brief article will answer some of those questions!

 

The Pioneers

As soon as the internet became widely used in the late 90s and early 2000s, game developers were trying to find ways to build online games. However, the very first adopter of the online space was Neverwinter Nights, all the way back in 1991. This Dungeons and Dragons-based role-playing game, developed by Stormfront Studios, allowed players to compete with each other on a ‘ladder’ system – essentially a leaderboard of points. It even had player-versus-player capabilities years before Call of Duty ever hit the scene! This was truly the first game to take advantage of the connectivity of the internet, and it acted as the forerunner for thousands of games to come.

 

Enter WoW

Anyone who knows online gaming has heard of World of Warcraft. This massively multiplayer online game was one of the biggest successes of its day, drawing over 12 million players at its peak in 2008. While it certainly wasn’t the first to take advantage of online play, it was definitely one of the most successful, which is why people today still fondly remember their time questing across Azeroth. Its loot-based leveling system was revolutionary, and it’s still one of the best examples of how gaming makes you feel great.

For many people, this was their first experience of online gaming. When it launched in 2004, console gaming hadn’t quite perfected its online presence, while PC games were becoming more and more popular thanks to faster internet connections. This culminated in World of Warcraft becoming one of the most successful games of its time, with nearly $10 billion in revenue earned over its lifetime. On top of that, there’s no doubt that its subscription-based format has influenced how giants of the industry (such as Microsoft and Sony) operate their online networks today. Without the early success of WoW, which proved not only that competitive online gaming could work but that it could be financially successful, online gaming would not be as big as it is today.

 

The Online Casino Explosion

Ever since online gaming was proven to be financially viable, plenty of other gaming mediums have jumped on the bandwagon. One of the major additions, which has gone on to become one of the most successful industries in the online gaming space, is online casinos. The first online casino was launched in 1996, but they didn’t really gather the audience they have today until the late 2000s and beyond. Now, with better graphics, server capacity, and mechanics, online casino games are one of the largest markets on the internet, with hundreds of millions of players worldwide. Furthermore, online casinos have taken cues from other gaming platforms, allowing players to interact with each other as well as the dealer, and introducing live casinos to improve the immersion.

 

Mass Connectivity, Today

Online gaming now dominates the video game market. Nearly every game, regardless of its genre, has an online element. Some games even require an online connection to run at all! This has led to a new style of gaming, with online features becoming the main focus of games, not only for the players but also for the developers and the publishers. For example, Call of Duty, one of the most successful FPS franchises of all time, is primarily bought for its online components rather than its single-player campaign, which has drawn some criticism. Some games, such as EA and DICE’s Star Wars Battlefront (2015), are designed solely for online, competitive play, with no real single-player elements.

Smaller independent titles have spotted this shift in the marketplace and used it to their advantage. Rocket League (2015), a football-inspired driving game, was a huge success in its year of release and was primarily designed around online matchmaking and gameplay.

Overall, faster internet speeds and easier accessibility mean that more people are playing online, and to feed this desire, game developers are putting more online elements in their games. So when we look at the evolution of online gaming, from retro RPGs to massive online experiences, it’s clear that it has been driven by players wanting to connect with each other. Whether it’s for competitive gameplay, as in Call of Duty or Halo, or for jolly cooperation, as in Dark Souls or WoW, player demand has seen the online gaming space skyrocket, and it shows no sign of slowing down.

The Future

With online gaming in its prime, where can we go from here? Well, with the emergence of VR gaming as a viable platform, it makes sense that online VR experiences will be the next big thing. Whether it’s in massive multiplayer communities or on a smaller scale, the ability to interact with friends in a virtual reality space will push gaming to the next level of immersion.

The Ethical Dilemma: Self-Driving Cars and the Laws of Robotics

Self-driving cars, autonomous cars, driverless cars – regardless of what you want to call them – are expected to revolutionize the entire automobile industry. For over a century, cars have consisted of a fairly straightforward combination of wheels, steering system, engine, and driver. It’s no wonder that this new technology has launched a global race that has automakers and tech companies scrambling to develop the best autonomous vehicle technology. And according to Morgan Stanley, self-driving cars will be commonplace by 2025.

So, talk of technology aside, let’s dive into the ethics and philosophy behind these vehicles.

The Laws of Robotics

In 1942, Isaac Asimov, sci-fi author and professor, introduced the three laws of robotics.

  • The First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law outlines that a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law states that a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
  • He later added a Fourth Law, also called the Zeroth Law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Can the clear rules-based code of a computer handle the nuances of ethical dilemmas?

Let’s take a look at a few hypothetical scenarios:

The Trolley Problem

You are the driver of a trolley whose brakes have failed. Fortunately, you still have the ability to steer the trolley from the main track to an alternate track. You can see the two tracks right ahead of you:

  • The main track has five workers
  • The alternate track has one worker

Both tracks are in narrow tunnels, so whichever direction you choose, anyone on that track will be killed. Which way will you go? Would you let the trolley continue down the main track and kill five, or would you switch it onto the alternate track and kill one?

Most people respond to the Trolley Problem by saying they would steer the trolley onto the alternate track, because their moral intuition tells them that it’s better to kill only one person rather than five.

Now for a little modification to this hypothetical scenario.

The runaway trolley is speeding down a track, about to hit five people. But this time, you’re on a bridge that the trolley is about to pass under. The only thing that could stop the trolley is a very heavy object. It just so happens that you are standing next to a very large man. Your only hope of saving the five people on the tracks is to push the large man over the bridge and onto the track. How would you proceed?

Most people strongly oppose this version of the problem – even the ones who had previously said they would rather kill one person than five. These two scenarios reveal the complexity of moral principles.

The Tunnel Problem

You are traveling on a single-lane mountain road in a self-driving car that is quickly approaching a narrow tunnel. Right before you enter the tunnel, a child tries to run across the road but trips right in the center of the lane, blocking the entrance to the tunnel. The car has only two options:

  • Hit and kill the child
  • Swerve into the wall, thus killing you

How should the car react?

Now that the age of self-driving cars has dawned upon us, this new technological innovation has given ethical dilemmas such as the tunnel and trolley problems a new relevance.

Hypothetical scenarios like the Tunnel Problem present some of the real difficulties of programming ethics into autonomous vehicles. In a survey asking how people would like their car to react in the Tunnel Problem scenario, 64% of respondents said they would continue straight and kill the child, while 36% would swerve and kill the passenger.

But who should get to decide?

44% of those surveyed felt that the passenger should make major ethical decisions. 33% felt that lawmakers should be the ones who decide, 12% felt that the decision should lie with the manufacturers and designers. The remaining 11% responded with “other.”

Ethics is a matter of sharing a world with others, so building ethics into autonomous cars is a lot more complex than just formulating the “correct” response to a set of data inputs.

Here’s one last ethical scenario for driverless cars.

The Infinite Trolley Problem

The Infinite Trolley Problem, introduced by autonomous vehicle advocate Mitch Turck, is a variant in which a single person is on the tracks. This person can easily be saved by simply halting the trolley, but that would inconvenience the passengers. So for this variant, the question is not “would you stop to save someone” but rather “how many people need to be on board the trolley for their inconvenience to be valued more than a single life?” This variant points to the fact that, given the current number of vehicular fatalities, waiting for self-driving cars to be 99 percent (if not perfectly) safe ignores how many accidents could be prevented as soon as the fatality rate for self-driving vehicles merely dips below that of human-driven vehicles, even if that rate is still nonzero.

Is waiting for perfection worth it?
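To put the Infinite Trolley argument in rough numbers, here is a small, purely hypothetical Python sketch: the mileage and fatality rates are illustrative placeholders, not real statistics, and the point is only the comparison between deploying merely-better-than-human autonomy now and waiting for near-perfection.

def expected_fatalities(rate_per_billion_miles, annual_miles_in_billions):
    """Expected deaths per year for a given fatality rate per billion vehicle-miles."""
    return rate_per_billion_miles * annual_miles_in_billions

ANNUAL_MILES_IN_BILLIONS = 3_000   # hypothetical: ~3 trillion vehicle-miles driven per year
HUMAN_RATE = 11.0                  # hypothetical deaths per billion miles, human drivers
AUTONOMOUS_RATE = 8.0              # hypothetical: merely better than human, far from perfect

human = expected_fatalities(HUMAN_RATE, ANNUAL_MILES_IN_BILLIONS)
autonomous = expected_fatalities(AUTONOMOUS_RATE, ANNUAL_MILES_IN_BILLIONS)
print(f"Human drivers:      ~{human:,.0f} deaths per year")
print(f"Imperfect autonomy: ~{autonomous:,.0f} deaths per year")
print(f"Deaths avoided each year by not waiting: ~{human - autonomous:,.0f}")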

Now you’re the self-driving car – how will you judge?

Enter the Moral Machine:  a platform developed by MIT used for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.

The Moral Machine shows you moral dilemmas in which a driverless car must choose the lesser of two evils (such as killing two passengers or five pedestrians). As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.

Ready?  Start judging.


 

Source:  http://blog.cjponyparts.com/2016/01/ethical-dilemma-self-driving-cars-robotics

Digital Natives and the Learning Divide

Ever since the term digital native (and its “opposite” digital immigrant) appeared on the radar back in 2001, the question on everyone’s minds is whether there really is a difference in how the former’s brains are wired and, if there is, what are the implications for teaching and learning. Are adult learning and education really so different?

Some would say yes. Others would say no. There is no definitive answer. But research has come a long way since the original publications on the matter, and now we have a clearer picture of how the new generation is different from the previous one. Spoiler alert: it’s about confidence with technology and not much else.

Going back to the theory, an approximate cutoff date for determining whether a person is a native or an immigrant is 1983. This is based on the idea that before 1983, the primary entertainment source for individuals under 18 was simply watching TV. After that, entertainment came from multiple sources, helping young people to develop multitasking skills as they switched between the TV and other entertainment sources with ease. This means that this generation was the first to have “hyperlinked” minds.

Putting this in the context of learning and education, this means that traditional models of learning by rote just simply aren’t fast enough for the natives. They need more dynamic models for learning to take place. And this is where technology comes in. Where else can teachers have the world’s knowledge at their fingertips but the internet? And it’s more than just an electronic book. Just think of all the wonderful things that tech-savvy teachers can do to help others learn by using the amazing range of software packages available; a large number of which are freeware simply because the world loves helping people learn.

If the native versus immigrant theory were to be believed, using these tools to complement learning would be useless for immigrants who need one source of information if they are ever going to retain anything. Yet, this doesn’t appear to be the case. Yes, adult learners are generally more resistant to using technology for learning, but that doesn’t mean that they can’t learn using them. It’s a shift in paradigm for them; their expectations of learning have been altered slightly. Let’s remember that changing minds in society takes some time. The recent furore over Jodie Whittaker’s casting as Dr. Who is a testament to that. Accepting changes in education will probably take just as long. Anything that isn’t learning by rote, with lots of homework, isn’t going to be taken seriously, at first. Obviously, the new generation can’t learn if they’re playing all day or doing projects instead of sitting down at a desk and learning the 3 Rs. Or can they?

Whatever generation you look at, there seems to be a common factor: more learning takes place if learners are enjoying it, if they’re engaged, and if they see the relevance to their lives. Natives, who in theory spend their lives interacting with technology, are engaged when learning is complemented by different technologies. But they also focus well when they know they have an important exam on the topic, or even when they collaborate. Adults might not feel as comfortable with an onslaught of technology in the classroom — especially without careful scaffolding by the teacher — but again, they become involved when it’s something of special importance or relevance to them.

Forget catering to alleged hyperlinked minds: the key is to engage learners. And for that, we need to turn to something that is fundamental to us humans: we are social beings. We like collaborating. We like working with a team to beat the other team. Learning by rote and lecturing does nothing to take advantage of this. It never has. Perhaps one of the ways to engage digital natives and their digital immigrant predecessors would be to take advantage of the wide range of competitive games that are so ubiquitous in popular culture and introduce them into the learning environment. No major overhaul needed; just some good, focused teaching. Who needs to reinvent the wheel when a small adjustment of a tire makes it ideal?

 

How Political Conspiracy Theories Are Born: 3 Case Studies

Online communities, forums, and social media have irrevocably changed the nature of political discourse. Indeed, they can facilitate a revolution. If you don’t believe me, look at what happened in Ukraine in 2014: about half of all Ukrainians learned about and organized protests on social media.

But what happens when the narrative is utterly wrong — when fringe political commentators and social media bots dictate the national discussion? That’s when you run into problems.

As much as the phrase “fake news” may incite eye rolls, it is a serious problem in today’s political climate. Here are three case studies of recent conspiracy theories that gained traction in the collective consciousness, and what we can learn from their proliferation:

The Assassination of Seth Rich

Source: Snopes.com

Last summer, during the presidential campaign, news broke from Washington D.C.: DNC staffer Seth Rich was murdered. According to local police, Rich’s death was the result of a failed robbery attempt. However, political pundits like Rush Limbaugh, Sean Hannity, and even Newt Gingrich argue that this was not a robbery attempt at all, but an assassination. The motive? Retribution. Some theorists argue that Rich was embittered by the corruption of the DNC and sought to expose its misconduct through WikiLeaks. For this, he supposedly paid the ultimate price.

While this narrative seems feasible at first glance, there is a serious lack of evidence to back it up. Nevertheless, commentators on various news outlets, including Breitbart and Fox News, have reported on it as though it were verified fact. The most compelling evidence to support this interpretation of events is a series of oblique statements from WikiLeaks founder Julian Assange, though this seems like tenuous grounding for a theory with such broad implications.

But the theory has spread, and it persists today.

The method in which this theory was spread can teach us a lot about how conspiracy theories can take hold. When conspiracy theorists are given the opportunity to share their views on mainstream media, their theories have more credence to the general public. This isn’t to say that radical speakers should not be given a chance to share their views in a public forum, but that we should be careful to analyze the credibility behind such claims — regardless of the platform.

Fake Protestors in Austin, TX

Another theory that made headlines during the last presidential campaign was born on Twitter, when user Eric Tucker posted a photo of parked buses and suggested they had been used to bring in paid anti-Trump protesters (screenshot via Snopes.com).

Tucker made this assertion after spotting many buses near the site of political protests in Austin, Texas. For conservative media, this made for an irresistible story: it seemed to de-legitimize the protests while highlighting supposedly dishonest tactics from out-of-touch Democrats. Hundreds of conservative news sites and blogs shared the story, and it quickly went viral.

However, this theory is verifiably false. Buses were not used to transport “protest actors.” Just two days after the tweet, Tucker himself wrote on his personal blog that he jumped to an erroneous conclusion. He clarified, “I now believe that the busses that I photographed on Wednesday, November 9, were for the Tableau Conference 2016 and had no relation to the ongoing protests against President Elect Trump. This information was provided to me from multiple professional journalists.”

Nevertheless, this retraction got substantially less coverage from conservative news outlets than the theory posited in his original tweet. This demonstrates how media bias can create conspiracy theories: if a theory fits a publication’s bias, the publication is more likely to promote it, and this form of bias can spread patently false narratives.

The Pizzagate Scandal

 

Pizzagate conspiracy

Of course, it is impossible to discuss recent political conspiracy theories without discussing the biggest of them all: Pizzagate. The core of the narrative is that Clinton-linked bigwigs participated in a secret pedophile ring run out of a D.C. pizzeria. While discussing (and disproving) the many claims associated with this vast conspiracy theory is beyond the scope of this article, you can read an in-depth rebuttal of it on Snopes.

Like the other cases in this article, this theory was perpetuated by fringe news sites on social media, where it grew rapidly. Infowars owner Alex Jones spoke passionately against the supposed pedophile ring, claiming in a (since removed) video “When I think about all the children Hillary Clinton has personally murdered and chopped up and raped, I have zero fear standing up against her.” One upset individual, Edgar Welch, wanted to uncover more information about the supposed illegal activities, so he entered the pizzeria with an assault rifle and fired multiple shots inside. The theory had obviously gotten out of control.

When it became clear that the Pizzagate conspiracy theory was not valid, some collective head-hanging ensued. Some conservative outlets retracted their coverage of the story. Alex Jones publicly apologized for his coverage of it and distanced himself from the subject. Welch stated that he was “truly sorry for endangering the safety of any and all bystanders who were present that day” and will be serving four years in prison.

Pizzagate got out of hand for many reasons. Facts rarely matter in the era of quick news blasts on social media, where truth is measured in likes and shares. When internet commentators delved deeper into the subject, they clung to anything that supported the story while ignoring any contradicting evidence. Social media bots and compromised accounts also played a role in elevating the conspiracy to the national consciousness.

Furthermore, as many Facebook users discovered earlier this year, even innocuous games and quizzes can put your account at risk, turning your account into a puppet for someone with an agenda.

There are some important lessons to glean from these case studies:

  • Investigate extraordinary claims. As Carl Sagan famously said, “Extraordinary claims require extraordinary evidence.” If you are skeptical of a claim, rationally investigate it, and do not discount contradicting evidence.
  • Listen to news sources from both sides of the political spectrum. Political bias may keep certain outlets from disclosing key facts concerning recent events. Read news from both sides of the aisle and come to your own conclusion.
  • Be wary when reading news on social media. Social media bots and compromised accounts have created an environment where fake news thrives. Refrain from sharing conspiracy theories without investigating them yourself. Who knows? You might help prevent the next Pizzagate.

What are your thoughts? Are there any other recent political stories that you think should be on this list? Respond in the comments below!

By Bob Hand

Are The Brains Of Millennials Wired Differently?

Unless you’ve been living under a rock, you’ll know that millennials are simply young people who reached adulthood in the early 21st century. It’s difficult to categorize exactly which age groups that spans, but let’s say, for argument’s sake, that this demographic covers those born roughly between the early 80s and the late 90s. Otherwise known as Gen Y, this lot includes many of the people who’ll be reading this right now. So, how are we different from Gen X, who came before us? It’s not just that their hair is gray and ours is only just starting to pepper. It goes a little bit beyond that. But could you really go so far as to say our brains are actually different?

It’s All About Technology

As always, we can rely on science to give us the answers we need. And it should really come as no surprise to anybody that technology plays a huge part in all of this.

Younger brains are craving multi-sensory approaches to life, and technology can give them just that. Additionally, because of this “sensory overload”, we’re better at filtering information and processing it in ways that make us ever more productive. Social sharing makes us feel good – so we’ve learned to harness that power across all aspects of life. But how have we managed to get our approach to technology to actually change our brains?

Look At The Ways Our Use Of Tech Is Manifesting

We often hear a bunch of negative stuff about how technology is destroying our brains. A study of Canadians found that the average human attention span is now shorter than that of a goldfish, sadly all down to smartphones, apparently. In 2000, we could concentrate for 12 seconds. Now, that’s down to just eight – as opposed to a goldfish’s nine.

However, all is not lost. Microsoft actually reckons that millennials have much more capacity to multitask. While they can’t concentrate for long periods, the more digital a person’s lifestyle – often synonymous in itself with being a millennial – the more they can direct their attention to multiple sources at once, and give their attention to lots of places in intermittent bursts. This is echoed by studies by the National Library of Medicine here in the US, highlighting that 79% of their respondents regularly used dual screens, using their phones to Snapchat at the same time as watching TV, for example.

This isn’t just allowing us to create entirely new pastimes, though. Millennials are using their new-found reliance on technology to revive hobbies from the past. When you think of bingo, a lot of people might think of the older generation. However, the emergence of online gaming has proven that all you really need to revitalize something old like bingo is a fresh new take on it: chat rooms to mimic the obsession with messaging apps, celebrity endorsements to appeal to the cross-platform marketing norms of our century, and, of course, apps, for true mobility. Bingo is just one example of how millennials approach age-old activities with a spin of their own.

Where Will This Take The Future?

It’s not all fun and games, literally. While many of the baby boomers might criticize us for not owning houses because we eat too much avocado (yeah… really), we’re actually redefining the way we live for the better.

In our digital age, millennials are increasing connectivity. The world is becoming smaller, and as such, our possibilities only grow larger. We travel more, and we’re creating jobs which allow us to do so, too – effectively transforming the workplace. Remote working is on the increase, a work-life balance is becoming ever more possible, and soon, millennials will be in the majority at work, armed with their ever-expanding skills and knowledge.

And as the distance between us gets smaller, so too do the differences between us all. Relationships are far easier to foster in faraway places, meaning that we can now much more easily interconnect, on a global level, transcending cultures as we go.

Millennials tend to be progressive, and always looking for new ways to fix the problems of old. Only now, for the first time, they may well have the information and the abilities to explore the possibilities to make real change, at their very fingertips. Stay tuned.

 

Image Sources:  “brain” (CC BY-NC 2.0) by TZA and “Millennials” (CC BY 2.0) by hahn.elizabeth34

Do the Vault 7 Leaks Confirm IoT Conspiracy Theories?

WikiLeaks has set the world ablaze with the release of 8,761 documents under the codename “Vault 7.” The documents reveal CIA activity that includes the creation of malware, viruses, and exploits that permit the intelligence agency to conduct widespread surveillance. For some readers, these revelations validate some long-held theories.

Considering the implications of these documents, some sensationalism is to be expected. However, it is important to separate confirmed information from speculation. Here are three conspiracy theories surrounding the Internet of Things, and information from the Vault 7 publication that either validate or invalidate this speculation:

Theory 1: Your TV Can Spy On You – Confirmed

In George Orwell’s book, 1984, the denizens of Oceania were spied upon by their government through “telescreens” — televisions that are impossible to turn off and double as surveillance tools for the government. In reality, smart TVs may serve the same purpose.

When Samsung first hit the market with smart TVs, some consumers were alarmed at a disclaimer in the device’s privacy policy. In the policy, the company warned consumers to refrain from speaking about personal or sensitive information, since that data would be transmitted to a third party. Samsung later clarified, stating that voice recognition could be turned off, but this event spawned dozens of theories about the dangers of smart TVs.

Apparently, the paranoia is justified. According to WikiLeaks documents, the CIA and MI5 worked together on a hack called “Weeping Angel” that specifically targeted Samsung smart TVs. With this hack, it was possible for agents to record conversations. The project even sought to implement a “fake-off” feature, in which the television would appear to turn off but continue to covertly record audio. A firmware update in June 2014 made it impossible to deliver the hack over the internet, though specific devices could still be compromised through a thumb drive. The CIA attempted to block future firmware updates from removing the hack.

While we will have to wait for more leaked documents to learn more about Weeping Angel (and potentially other hacks), the actions of these intelligence agencies demonstrate a clear disregard for consumer privacy. As noted by Vijay Kanabar, an associate professor in Boston University’s Online Computer Information Systems program, “There is a culture of trust here in this country that is pretty easy to exploit.” While Americans may show a general disregard for the importance of cybersecurity, it is galling that the U.S. government would commit cyberattacks on its own citizens. It is unlikely that these agencies were content with hacking only Samsung TVs, and upcoming leaks may reveal the full scope of this program.

Theory 2: Car Cyberattacks Can Be Used to Assassinate Political Enemies – Possible

Four years ago, journalist Michael Hastings died under mysterious circumstances. The New York native was investigating a privacy lawsuit brought against the DoD and FBI. Hours before crashing into a tree in Los Angeles at a high speed, Hastings sent an email indicating that he was “going off the radar for a bit.” The circumstances of the crash prompted speculation that an intelligence agency may have remotely seized control of his car. Proving that his death was an assassination rather than an accident, however, is impossible.

This account resurfaced very recently in light of the Vault 7 revelations. In a press release, WikiLeaks notes that the Mobile Devices Branch (MDB) of the CIA was looking to compromise the control systems of modern cars and trucks. While the purpose of these hacks is not specified, pundits have noted that the desired control would give the agency the capacity to perform “nearly undetectable assassinations.”

Definitive proof that the CIA was successful in its attempts to remotely hack automobiles has not been released. As noted by Scientific American, remotely hacking a vehicle is an incredibly difficult task. Nevertheless, the agency has demonstrated a clear eagerness to make it a reality. The idea that cyberattacks are being used to assassinate political enemies is plausible, but not proven.

Theory 3: The Government Can Track You Through Your Phone – Confirmed

This theory is confirmed, but not in the way some theorists have speculated. In mid-2015, some internet users discovered a strange chip in their Samsung Galaxy S4s. People speculated that this chip was used to collect data on users, including photos and personal information. In reality, that little chip is an NFC chip, used only to facilitate mobile payments. Regardless, there are many other ways that internet users can be tracked — so nothing you do online from your phone should be presumed to be private.

Recently, it was revealed that the CIA has actually sought to use mobile phones for real-time surveillance — not mere internet tracking. The MDB has the capacity to infect devices and have them send the user’s location, calls, and text messages to remote agents. Furthermore, it can activate the camera or microphone of a device in order to monitor a target in real time. Work to keep these hacks operational is continuous, since firmware updates necessitate new workarounds.

While some use of consumer data to facilitate data-driven marketing is to be expected, outright mass surveillance was something relegated to fringe conspiracy theory forums — until recently.

Is widespread mobile surveillance a goal of the CIA? Or were these hacks limited to specific targets deemed to be a threat?

The scope of this program may come to light as the remainder of the Vault 7 documents are released.

 

Is Big Data the Answer to Epidemics?

Researchers around the world are focused on improving our capacity to predict and prevent epidemics. These efforts are almost universally founded on the concept of big data. Given the failings of big data to predict the outbreak of Ebola in recent years, critics are concerned that this approach may be inherently flawed. Is this true?

Let’s analyze the situation:

Rising as a buzzword in the early 2000s, “big data” refers to the availability of large data sets that can be used to make inferences. Using big data can lead to a lack of nuance and often depersonalizes matters — which makes it a controversial approach in some fields. While corporations seeking profit often use customer information for logistics or advertising, big data can also be used for humanitarian purposes.

Most notably, organizations across the world have been using big data to further disease prevention efforts. An example of this can be found in the international response to the 2014 Ebola outbreak in West Africa. Using big data, public health workers were able to track the spread of the disease in real-time, in a process known as disease mapping.

Disease mapping allows us to identify where a disease is likely to spread. This gives humanitarian organizations an idea of where they should send resources and workers. Treatment can be tricky, though researchers have found promise in drugs already on the market, such as sertraline. The Ebola outbreak has been contained as of January 2016, though flare-ups may still occur.

Ironically, while the practice of using big data can be heralded for containing the epidemic, it can also be blamed for failing to foresee the outbreak in the first place. During the panic, the CDC predicted that there would be 1.4 million cases of the fatal disease. The World Health Organization made a much closer estimate at 20,000 cases, but was still off the mark by a considerable percentage. Actually, there have been approximately 13,000 cases of Ebola in the past two years.
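Using only the figures quoted in the paragraph above (and treating the article’s approximately 13,000 figure as the actual count), a few lines of code show how far off each forecast was; this is a sketch for illustration, not an independent data source.

ACTUAL_CASES = 13_000   # the approximate count cited in this article
forecasts = {"CDC": 1_400_000, "WHO": 20_000}

for agency, predicted in forecasts.items():
    ratio = predicted / ACTUAL_CASES
    print(f"{agency}: predicted {predicted:,} cases, roughly {ratio:.1f}x the cited actual count")
# CDC: ~107.7x too high; WHO: ~1.5x too high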

Ebola Virus (Wikimedia Commons)

How did we get this information so wrong?

Clearly, there is a problem with placing too much faith in big data. While it can be useful in some applications, we need to be aware of its limitations — or, rather, our lack of data. Our inferences can only be accurate if we have a large amount of reliable data.

The key to improving this approach lies in better data collection techniques. As surveillance technology and health reporting becomes more advanced, more useful data will be available to researchers. Factors that make an area predisposed to an infectious outbreak, such as climate, sanitation, and water supply, must also be considered when gauging the potential for an outbreak. Accurate predictions allow us to effectively stifle outbreaks before they occur.

In the words of Andrew McAfee, “the world is one big data problem.” Big data could be the solution to epidemics in the future, but collecting the necessary data is a difficult task. While high-profile errors have caused pundits to critique the use of big data in disease prevention, researchers are developing new ways to predict and prevent diseases every day. As relevant data becomes more readily available, our ability to prevent epidemics in the future will improve.

 

5 Things You Didn’t Know you Could use Bitcoin for

If you don’t know what Bitcoin is yet, then you must have been living under a rock for the last year. In 2016 the cryptocurrency that will perhaps take over from traditional methods of payment online reached new heights. Now, more places than ever accept the e-currency, including your favourite fast food and coffee joints like Subway and Starbucks. Want to know what other interesting things you can use Bitcoin for? Read on…

Bitcoin Casinos

Long gone are the days of walking into a casino and cashing your notes in for chips. Bitcoin casinos are the future of gambling. Now you can use the virtual currency to double down on blackjack, turn the wheel on roulette, or spin the reels on slots. Enter a Bitcoin website today and see just how many games have been developed to be played with this futuristic currency.

The First Ever Apple Macintosh

Why not appreciate just how far technology has come by purchasing a pioneering piece of tech history with a pioneering currency?  The first ever Apple Mac was released in 1984.  This nostalgic item can be picked up for around 6.34 BTC at the time of writing.

A Flight into Space

Sir Richard Branson is hoping to start flying wealthy people into space soon on SpaceShipTwo, operated by his company Virgin Galactic. The billionaire tycoon is an avid fan of bitcoin and believes that “the currency of the future should be used to pay for the travel of the future.” Celebrities such as Lady Gaga, Justin Bieber, Leonardo DiCaprio, and Ashton Kutcher have already signed up to make the voyage, and you could too, for the princely sum of 291.41 BTC.

Bitcoin Goat

 

Via https://www.reddit.com/r/Bitcoin/comments/3d0q8g/buy_goat_for_btc/

Bitcoin isn’t just for buying futuristic things or spending on internet products. A lot of regular land-based companies now accept the currency. Pubs, restaurants, cafes, and now even goat salesmen have started letting people pay with their virtual wallets. Bitcoin goat has now become an internet craze, and is doing wonders to promote the e-currency:  because you never know when you’ll need a goat.

A Private Island in Micronesia

Ryan Weaver, a prosperous software developer and bitcoin enthusiast, is hoping to sell his idyllic private island off Pohnpei for bitcoins. The computer expert reckons he might be able to cut a deal when bitcoin prices surge again. Weaver wants around 968 bitcoins for the island, which seems quite reasonable when you consider that only seven years ago 10,000 bitcoins were used to purchase two pizzas.
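
For a rough sense of scale, here is a quick back-of-the-envelope conversion of the prices mentioned above; the exchange rate is an assumed figure purely for illustration, since bitcoin's price moves constantly:

```python
# Quick value comparison at an assumed exchange rate (illustrative only).
BTC_TO_USD = 900.0   # assumed rate; substitute the current price

purchases = {
    "Two pizzas (2010)": 10_000,          # BTC reportedly paid at the time
    "Private island asking price": 968,   # BTC, per the listing above
    "Virgin Galactic seat": 291.41,
    "Original 1984 Macintosh": 6.34,
}

for item, btc in purchases.items():
    print(f"{item}: {btc} BTC is roughly ${btc * BTC_TO_USD:,.0f}")
```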

Along with these weird and wonderful bitcoin purchases, you could also use your digital currency to buy mammoth tusks, spy coins, or even a gold mine in Canada. Change your money into bitcoin and see what you can do with this growing monetary phenomenon.

The post 5 Things You Didn’t Know you Could use Bitcoin for appeared first on Relatively Interesting.

]]>
http://www.relativelyinteresting.com/5-things-didnt-know-use-bitcoin/feed/ 0 8829
Saving the environment, one IoT device at a time http://www.relativelyinteresting.com/saving-environment-one-iot-device-time/ http://www.relativelyinteresting.com/saving-environment-one-iot-device-time/#respond Wed, 11 Jan 2017 18:19:11 +0000 http://www.relativelyinteresting.com/?p=8820 Relatively Interesting -


]]>
Relatively Interesting -

The Internet of Things (IoT) will change the world we live in. The vast network of everyday items connected to each other through the internet will revolutionize our daily lives and alter our economy. But how will it affect the environment?

IoT presents some environmental concerns, such as the raw materials used to make the devices and the energy needed to run them and store the data they create. These issues could be mitigated by designing devices with sustainability best practices in mind and powering them with renewable sources.

IoT will take a toll on our environment in some ways, but in others it will provide much-needed assistance. Several IoT technologies are designed specifically to address environmental issues and could help slow or reduce the impact of climate change.

Energy Efficiency

A smarter home or business is one that uses less energy and, therefore, is better for the environment while saving you money. A recent study found that IoT could reduce energy consumption by 10 percent.

When everything in the home is connected to the internet, it can automatically shut itself off or reduce its power usage when it isn’t needed. IoT could even synchronize energy usage across the grid to avoid too many people using too much power at one time. Many of these devices can learn on their own when to adjust the amount of power they pull.

This applies to all sorts of devices, from thermostats, lighting and water heaters to refrigerators and washing machines. Even if your home already uses energy-efficient devices, connecting them through the Internet of Things can still improve their efficiency.
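
As a rough illustration of the kind of rule a connected thermostat might apply, here is a minimal sketch; the temperatures and signals are hypothetical:

```python
# Hypothetical smart-thermostat rule: set back heating when nobody is home
# or when the grid reports peak demand. Values are illustrative only.

COMFORT_TEMP_C = 21.0
SETBACK_TEMP_C = 17.0

def target_temperature(occupied, grid_peak):
    """Choose a heating set point from occupancy and grid-demand signals."""
    if not occupied or grid_peak:
        return SETBACK_TEMP_C   # save energy when the house is empty or the grid is strained
    return COMFORT_TEMP_C

# Example: the house is empty and the utility has signalled peak demand.
print(target_temperature(occupied=False, grid_peak=True))  # -> 17.0
```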

Smart Farming

The role that data and “smart” technology play in farming is on the rise. And thanks to that technology, so are yields, even as energy and water use falls.

IoT can help farmers gather data about their soil, crops, livestock and the weather, and then use that data to decide when and where to plant. Farmers collect it with sensors, as well as with drones and tractors equipped with an internet connection and the ability to drive themselves.

We will need to produce even more food in the coming years, and IoT can help that process be more efficient. IoT can help manage irrigation so farmers can use less water and get a better yield from less land.
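
To make the irrigation example concrete, here is a minimal sketch of a sensor-driven watering decision; the thresholds and field names are made up for illustration, not agronomic advice:

```python
# Hypothetical irrigation rule: water a field only when soil moisture is low
# and little rain is forecast. All thresholds are illustrative.

MOISTURE_THRESHOLD = 0.30   # volumetric soil moisture below which we irrigate
RAIN_FORECAST_MM = 5.0      # skip irrigation if at least this much rain is expected

def should_irrigate(soil_moisture, forecast_rain_mm):
    return soil_moisture < MOISTURE_THRESHOLD and forecast_rain_mm < RAIN_FORECAST_MM

fields = {
    "north_paddock": (0.22, 1.0),    # (soil moisture reading, forecast rain in mm)
    "river_field": (0.41, 0.0),
    "south_paddock": (0.25, 12.0),
}

for name, (moisture, rain) in fields.items():
    action = "irrigate" if should_irrigate(moisture, rain) else "skip"
    print(f"{name}: {action}")
```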

Environmental Sensors

Environmental sensors can monitor soil quality on farms and have the potential for many other uses as well. They can be used to track water and air quality too. This data can be combined to provide an extremely accurate and up-to-date picture of our environment.

These sensors can also be used to monitor forests for fires and watch for natural disaster events such as earthquakes and tsunamis.

IoT could help us understand our environment better and help pinpoint problem areas we need to fix.

Wildlife Monitoring

IoT is already helping to protect endangered species and monitor wildlife and will continue to play a role in this area.

Collars — and one day, sensors placed under the skin — equipped with IoT technology can help people monitor endangered animals, study their habits and learn more about how to protect them. One project even used these collars to track lions and alert cattle farmers to their whereabouts in order to avoid deadly run-ins.

Lion tracked with collar

Drones have also been used to keep an eye on wildlife to help protect them from poachers. The drones are less invasive than other methods and make it easier to consistently monitor animal populations.

Technology has the potential to help us protect our environment, and IoT is one of the most promising technologies in this area. Whether it's monitoring air, water and soil quality, tracking wildlife or reducing energy usage, IoT has an interesting future ahead of it and could play a prominent role in reducing our environmental impact.

 

 

The post Saving the environment, one IoT device at a time appeared first on Relatively Interesting.

]]>
http://www.relativelyinteresting.com/saving-environment-one-iot-device-time/feed/ 0 8820
The Evolution of the Credit Card: From Paper to Plastic to Virtual http://www.relativelyinteresting.com/evolution-credit-card-paper-plastic-virtual/ http://www.relativelyinteresting.com/evolution-credit-card-paper-plastic-virtual/#respond Fri, 23 Dec 2016 14:12:07 +0000 http://www.relativelyinteresting.com/?p=8794 Relatively Interesting -


]]>
Relatively Interesting -

These days, credit cards are nearly ubiquitous. In fact, there were an estimated 18.08 billion credit and debit cards in circulation in 2015—that’s nearly 2.5 cards for every person alive on the planet.

It's hard to think back to a time when we didn't get daily offers for new cards in the mail, or when we didn't see a commercial for a credit card every time we turned on the TV. Maybe that's why it's so strange to think that the credit card as we know it has only existed for a matter of decades.

Let’s go back to the beginning, and see just how far we’ve come since then.

1950s and ‘60s: The Beginning

Today, the payments industry is dominated by a select few brands. However, in the early years of charge cards, the market had a lot more competition.

Introducing Diners Club

According to legend, the idea for the world’s first independent charge card company, Diners Club, came to co-founder Frank McNamara while dining with clients in 1949. After realizing he’d forgotten his wallet, McNamara devised the idea of a multipurpose charge card which could be used at a number of different locations.

Diners Club launched the following year, accepted by an initial 27 participating restaurants. By the end of 1951, the card had grown from its first 200 cardholders to 42,000.

American Express Breaks into the Market

Through most of the 1950s, Diners Club remained the only major card brand on the market. But by the end of the decade, a new competitor appeared—American Express.

American Express was actually founded a full century before Diners Club as an express mail service; however, as the years passed, the company gradually expanded into financial services. Eventually, the company gained recognition as a major financial player with the introduction of an innovative product in 1891—the traveler’s cheque.

American Express issued their first charge card in 1958. Though the cards were initially printed on paper like the early Diners Club cards, Amex issued the world’s first embossed plastic cards in 1959.

The Next Generation: BankAmericard, Master Charge & Eurocard

The same year that American Express launched their first card, so too did Bank of America. Their card, the BankAmericard, launched with a then-unconventional approach: an unsolicited mass mailing of 60,000 cards to Bank of America customers.

The card quickly expanded into several markets in California, which led to early issues including a high delinquency rate, as well as the advent of credit card fraud. By the mid-60s, however, the situation was more stable. The card had expanded throughout the state and was, by that point, accepted by tens of thousands of different merchants.

In direct response to the BankAmericard, several other California-based banks, including Wells Fargo and Bank of California, decided to launch their own interbank card in 1966—the Master Charge card. This card was unique in that it was the first not branded and issued by a single bank. Instead, Master Charge was simply a card network, providing cards to multiple issuing banks within its network.

Meanwhile in 1964, a bank in Sweden began offering their own international charge card, known as the Eurocard. Eurocard formed a strategic partnership with Master Charge in 1968, enabling cardholders from either network to carry out transactions using the other’s infrastructure.

Around that same time, Bank of America started licensing their cards to international banks. The BankAmericard became known under a number of different international names, including Chargex (Canada), Carte Bleue (France) and the Barclaycard (UK).

1970s and ‘80s: The Modern Market Starts to Take Shape

In 1970, realizing that their existing system of licensing the BankAmericard to different banks was growing messy and over-complicated, Bank of America gave up direct control of the card. BankAmericard then became a separate card network much like Master Charge. This network was first known as National BankAmericard Inc., then as the International Bankcard Company (IBANCO) in 1974.

In 1976, the company again changed their name, as well as their brand. Now, rather than offering their cards under a myriad of names around the world, IBANCO consolidated their brands under one international name—Visa. Three years later, Master Charge changed their name as well, becoming MasterCard.

Finally, retail giant Sears entered the market in 1985. Wanting to expand their already extensive financial portfolio, the company issued the first Discover cards in 1986 through the company’s proprietary bank, the Greenwood Trust Company. The card quickly gained popularity owing to its low merchant fees, lack of an annual fee and cashback bonuses. The company split from Sears in 1993.

Today’s Credit Innovations

Credit cards have changed a great deal since the early printed paper cards, as has the industry itself. Once-prominent names like Carte Bleue and Access have vanished, absorbed by other networks, while the once-mighty Diners Club maintains only a small following.

Currently, Visa holds a considerable lead over its competitors in terms of market share. In the U.S. alone, consumers conducted $1.2 trillion worth of transactions using Visa credit cards in 2014, and nearly $1.3 trillion more with debit. American Express represented the next-highest figure, at $684 billion. However, the fastest-growing name in the modern credit landscape is actually a relatively young entity—China's UnionPay, which in 2015 accounted for 13% of total worldwide card sales volume.

At the same time, there are a number of new technologies currently on the market, and the industry continues to evolve in order to keep pace with technological developments:

Mobile Wallets

Smartphone apps like Apple Pay and Samsung Pay utilize near field communication (NFC) technology to conduct transactions without the presence of a physical card. When shopping online, these ‘OS-Pay’ users can check out with a click of a button—no monotonous information entry needed.

In addition to convenience, security on these platforms is quite revolutionary. Not only do mobile wallets replace sensitive card information with difficult-to-hack tokens, but customers authorize each transaction with a fingerprint rather than a signature or PIN.

Biometric Authorization

Biometrics are quickly becoming a mainstay of payment security. Earlier this year, MasterCard started testing its Identity Check technology, allowing customers to authorize online payments using facial recognition technology. Additionally, companies like Zwipe began working toward bringing fingerprint-enabled payment cards to market for brick-and-mortar purchases.

Motion Code™

By the late nineties, most payment cards featured card security codes. When requested during the online checkout process, these three- or four-digit codes helped ensure the physical card was present at the time of purchase and that sensitive card information hadn't been stolen.

As a way of making these codes more secure, Oberthur Technologies introduced a new technology called Motion Code™ back in 2014. This replaces the printed code with a digital display, which changes the security code multiple times a day.
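
Conceptually this resembles the time-based one-time passwords used for two-factor login. The sketch below shows that general idea of a code both card and issuer can recompute for the same time window; it is not Oberthur's actual Motion Code™ algorithm:

```python
# Illustration of a code that rotates on a schedule, in the spirit of TOTP.
# NOT the Motion Code(TM) algorithm; the shared secret below is hypothetical.
import hashlib
import hmac
import struct
import time

SECRET = b"hypothetical-shared-card-secret"
WINDOW_SECONDS = 4 * 60 * 60            # e.g. a fresh code every four hours

def current_code(secret=SECRET, now=None):
    window = int((now if now is not None else time.time()) // WINDOW_SECONDS)
    digest = hmac.new(secret, struct.pack(">Q", window), hashlib.sha256).digest()
    return f"{int.from_bytes(digest[-4:], 'big') % 1000:03d}"   # three-digit code

print(current_code())
```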

Looking to the Future

From paper charge cards to facial recognition, credit cards have come a long way in 66 years. Technology now moves so quickly that it is outpacing our ability to predict it even a few years down the road. It's difficult to say where the payments industry will be even 10 years from now, but past experience suggests we have many new and exciting developments in our near future.

Care to speculate on what the future holds for payments and credit card technology? Leave a comment below to join the conversation.

 

Article written by guest author Monica Eaton-Cardone. Monica shares her payment industry expertise at Chargebacks911.com.

Image sources:
Diners Club Card: http://www.moneypeach.com/history-of-credit-cards-timeline/
Amex Card: http://creditcardforum.com/blog/the-history-of-american-express-cards/
Bank Americard: https://cashcofinancial.com/2016/01/the-history-of-plastic-money/
Master Charge Card: http://www.creditcardscompare.co.nz/credit-cards/

 

The post The Evolution of the Credit Card: From Paper to Plastic to Virtual appeared first on Relatively Interesting.

]]>
http://www.relativelyinteresting.com/evolution-credit-card-paper-plastic-virtual/feed/ 0 8794
How to Hack the Lottery http://www.relativelyinteresting.com/how-to-hack-the-lottery/ http://www.relativelyinteresting.com/how-to-hack-the-lottery/#comments Fri, 16 Dec 2016 15:43:02 +0000 http://www.relativelyinteresting.com/?p=8760 Relatively Interesting -


]]>
Relatively Interesting -

Are you one of those people who roll your eyes whenever you see the lottery advertised on TV? It seems that no matter where you are in the world, lotteries are advertised in exactly the same way: they always emphasize the fact that “it could be you”, but only if you enter.

Of course this conveniently ignores the obvious fact that, statistically-speaking, it won’t be.

Take PowerBall, the US lottery that currently holds the world record for the biggest jackpot of all time ($1.58 billion).

The odds of you winning that are one in over 292 million, which is so infinitesimal you might as well not bother playing at all and just save your money.
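
That figure isn't plucked from thin air. Under the Powerball format in use since late 2015, you must match 5 numbers drawn from 69 plus a Powerball drawn from 26, and two lines of arithmetic confirm the odds:

```python
# Powerball jackpot odds: match 5 of 69 white balls, plus 1 of 26 red balls.
from math import comb

combinations = comb(69, 5) * 26
print(f"1 in {combinations:,}")   # 1 in 292,201,338
```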

Of course, I'm not their target demographic; I'm not one of those starry-eyed “one day I'll win the lottery” people. Let's call them Group A.

I belong in Group B, one of the millions of rationally-minded individuals who see those astronomical odds and think, “there’s no way!”

But then there's a third type of person, a very rare breed admittedly; let's call them Group C. And although Group C shares many traits with Group B, their conclusions differ dramatically.

While we of Group B see the lottery as folly, the mathematically-minded challenge junkies in Group C look at those staggering odds the same way an adventurous mountain climber might view K2 – no matter how high and difficult it may be, it can be conquered!

Lottery Enlightenment

One such person was Voltaire, one of the most influential and, often times, controversial thinkers of the French Enlightenment.

A gifted writer and polymath, Voltaire held liberal views on racial equality, and his repeated condemnation of Church and State put him at odds with the rigid, patriarchal power structures of the time. As a result, he often ran afoul of the authorities, spent time imprisoned in France's infamous Bastille and was at one point forced into exile in Britain.

Just under three years later, Voltaire was able to return to Paris, where, alongside a mathematician named Charles Marie de la Condamine, he hatched a plan that would secure him enormous wealth while also humiliating the French establishment that had persecuted him.

The government at that time was in serious financial difficulty, and this was having a knock-on effect on its ability to issue bonds. The city of Paris needed a way to bolster the value of its bonds, and so the finance minister, Michel Le Pelletier Desforts, suggested they run a parallel lottery. The proceeds would then be used to make up the shortfall in bond value.

“Genius!”, thought the French government.

“Au contraire!”, thought Voltaire.

The lottery was run throughout Paris, but the value of bonds differed by district, and the cost of a lottery ticket was tied to the bond it accompanied. This meant it was significantly cheaper to buy lottery tickets in some districts than in others.

Once they crunched the numbers it soon became apparent that all Voltaire and his syndicate had to do was buy up all the tickets in one district and corner the market completely.

Eventually Voltaire and his associates were brought to court, but the case was thrown out since it was ruled that they had done nothing illegal. So Voltaire was allowed to keep his vast wealth which, even in today’s money, would make him a multimillionaire.

Not content to rest on his laurels, Voltaire used his money to invest in new ideas and industries. That didn't mean he joined the aristocracy; on the contrary, he continued to live as he always had, provoking the rich and powerful – except now he was never short of getaway money.

Luck of the Polish

OK, I know what you're thinking: nice story, but what relevance does it have today? You can't compare a bunch of bumbling French bureaucrats from the 1700s to contemporary lottery operators!

Today we have professional organizations with racks of computing power and legions of statisticians – surely nothing can slip past them!

Well, not quite. In recent times we've seen numerous “lottery hacks” successfully beat these so-called unassailable systems for huge profits.

In 1992, for example, a Polish-Irish accountant formed a syndicate which managed to brute force the fledgling Irish Lottery, making off with millions.

Dublin native Stefan Klincewicz had been keeping a keen eye on the Irish Lotto which, at the time, was only four years old. Ireland being a small island with a relatively low population, the operators felt it made sense to run it as a modest 6/36 game.

What Klincewicz realized, however, was that, thanks to the small pool of possible combinations and the similarly low ticket price, once the jackpot rolled over to reach a certain value, a brute-force attack would suddenly become profitable, allowing his 28-strong syndicate to bulk-buy tickets just as Voltaire had done.

This gave them roughly 80% of all possible combinations, for a cost of just under IR£900,000 (this was several years before the adoption of the euro), before officials finally realized what was happening. Of course, by then it was too late. But in an ironic twist of fate, one non-syndicate member also had a winning ticket, so the eventual jackpot was split. (Today the Irish Lotto is a much tougher 6/47 game, and the ticket price has been increased several times.)
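
The arithmetic behind the Irish scheme is easy enough to check yourself. The combination count below is exact; the per-line price is an assumption for illustration only:

```python
# A 6/36 lottery has a fixed, and surprisingly small, number of combinations.
from math import comb

combinations = comb(36, 6)          # 1,947,792 possible tickets
price_per_line = 0.50               # assumed price per line, for illustration
full_coverage_cost = combinations * price_per_line

print(f"{combinations:,} combinations")
print(f"Full coverage at {price_per_line:.2f} per line: about {full_coverage_cost:,.0f}")
# Once a rolled-over jackpot exceeds that figure, buying every ticket becomes
# profitable in expectation (ignoring the risk of having to share the jackpot).
```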

Number Crunchers

Meanwhile, across the pond, lotteries in North America were set to suffer a similar fate, starting in Virginia where, in 1992, a group of Australians brute-forced the state lottery, buying up a large share of the roughly 7 million possible number combinations before making off with $27 million. In this instance the number of syndicate members was much higher, 2,500 in total, meaning they each received just over ten grand – still, not bad for a day's “work”, eh?

Now, it's one thing to spot a flaw and take advantage of it; it's quite another when the game designers are aware of the flaw but decide not to do anything about it. This is what happened in 2012, when a group of MIT students discovered that brute-force ticket buying would always result in a profit of between 15 and 20%. Time and time again they milked the lottery. The strange thing this time, though, was that the Massachusetts State Lottery had known about the flaw for seven whole years but hadn't bothered to do anything about it! Only after the syndicate won a huge jackpot did it decide to take action and discontinue the game.

Another MIT alum, Mohan Srivastava, was somewhat more sporting when he discovered a glaring flaw in a tic-tac-toe scratchcard game. The scratchcard had actually been purchased for him as a joke and turned out to be a winner, earning him $3.

Anyone else would probably have enjoyed the little win for what it was on the surface, but Srivastava was not an on-the-surface kind of guy.

The Toronto-based statistician specialized in geology and mining, specifically in how to map the spread of precious mineral deposits deep underground. He began thinking about how he could apply the same principles to scratchcards. It turned out there was an underlying pattern to how the cards were printed, allowing him to predict winners in 19 cases out of 20. But, unlike the others in this story, he decided not to take advantage and instead reported his findings to the lottery operators, who promptly removed the game from circulation.
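
As Srivastava later described it in interviews, the giveaway was in the numbers printed visibly beside each tic-tac-toe grid: digits appearing only once anywhere on the card ("singletons") tended to mark the winning squares, and three singletons in a row signalled a winner. A rough sketch of that counting heuristic, on an entirely made-up card layout, might look like this:

```python
# Rough sketch of the "singleton" heuristic on a hypothetical scratchcard:
# numbers that appear exactly once across the whole card are flagged, and a
# board with three flagged numbers in a row is predicted to be a winner.
from collections import Counter

# A made-up card: each board is a 3x3 grid of visible numbers.
boards = [
    [[12, 7, 33], [5, 21, 8], [14, 3, 29]],
    [[7, 40, 12], [19, 26, 5], [11, 21, 8]],
]

counts = Counter(n for board in boards for row in board for n in row)

def is_singleton(n):
    return counts[n] == 1

def lines(board):
    rows = board
    cols = [list(col) for col in zip(*board)]
    diags = [[board[i][i] for i in range(3)], [board[i][2 - i] for i in range(3)]]
    return rows + cols + diags

for i, board in enumerate(boards):
    predicted = any(all(is_singleton(n) for n in line) for line in lines(board))
    print(f"Board {i}: {'likely winner' if predicted else 'probably not'}")
```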

Would you have done the same? Let us know in the comments below!

Hacking Through Your Wallet…

Another question: if I had a way to beat the lottery, would I tell you?

Who's to say I haven't already?

Well, honestly, I don’t claim to be as smart as Voltaire but perhaps you can find a flaw in the probability of your local lottery game and take advantage just as he did.

In order to “hack” the lottery, you need five simple ingredients (a rough back-of-the-envelope check follows the list):

  1. You need a low-odds lottery that has rolled over to a large jackpot
  2. You need the ticket price to also be quite low
  3. You need to have a syndicate of like-minded individuals ready to put up the significant cash involved in order to make it work
  4. You need to be fast, and highly coordinated to make sure the operators and vendors don’t know what’s happening before it’s too late
  5. And you still need a lot of luck!
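
To put rough numbers on ingredients 1 to 3, here is a back-of-the-envelope check; every figure in it is a placeholder you would swap for your local game's real odds, prices and prizes:

```python
# Back-of-the-envelope check on a full-coverage lottery play.
# All numbers are placeholders; plug in your local game's real figures.
from math import comb

balls, picks = 40, 6                 # hypothetical 6/40 game
ticket_price = 1.00
jackpot = 3_000_000                  # rolled-over jackpot
expected_winning_tickets = 1.5       # how many winning tickets you expect in total

combinations = comb(balls, picks)
cost = combinations * ticket_price
expected_return = jackpot / expected_winning_tickets   # your share if you hold a winner

print(f"Tickets to buy:  {combinations:,}")
print(f"Total cost:      {cost:,.0f}")
print(f"Expected return: {expected_return:,.0f}")
print("Worth it?", expected_return > cost)
# With these placeholder numbers the shared jackpot does not cover the cost,
# which is exactly the danger the closing paragraph warns about.
```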

Let's not forget that, for all his years of meticulous planning, the one thing Stefan Klincewicz hadn't counted on was someone else holding the exact same winning numbers. Splitting a fraction of the jackpot with a syndicate isn't quite like splitting the whole thing – and at some point there's the danger that the fraction will be less than the syndicate's investment!

 

 

The post How to Hack the Lottery appeared first on Relatively Interesting.

]]>
http://www.relativelyinteresting.com/how-to-hack-the-lottery/feed/ 1 8760
One of the all-time great inventors: Guglielmo Marconi http://www.relativelyinteresting.com/one-time-great-inventors-guglielmo-marconi/ http://www.relativelyinteresting.com/one-time-great-inventors-guglielmo-marconi/#comments Wed, 02 Nov 2016 20:28:52 +0000 http://www.relativelyinteresting.com/?p=8678 Relatively Interesting -


]]>
Relatively Interesting -

The inventor Guglielmo Giovanni Maria Marconi was born into nobility in 1874 in Bologna, Northern Italy, the second son of wealthy landowner Giuseppe Marconi and his Irish wife Annie Jameson.

Although Marconi did not do well at school, he developed an early interest in science and electricity and, in the early 1890s, aged only eighteen, began working on ‘wireless telegraphy’ (the transmission of telegraph messages without wires). This was not a new idea, but for a time it captured the young man's attention.

A later article written by Augusto Righi, Marconi's teacher, renewed the young man's interest in trying to create a wireless telegraphy system based on radio waves – something few other inventors were seriously pursuing as a practical system at the time.

Early developments in radio telegraphy

At the age of twenty, Marconi developed a transmitter with a monopole antenna. This early contraption was made up of a raised copper sheet, connected to a Righi spark gap, powered by an induction coil that had a telegraph key to switch it on and off to spell out Morse code text messages.

In 1894, Marconi built a storm weather alarm which consisted of a battery, a coherer (a detector that changes resistance when exposed to radio waves), and an electric bell. If there was lightning, the alarm would go off. Soon after this, Marconi made a bell ring on the other side of the room by pushing a telegraphic button on a table.

Early breakthrough

A breakthrough in radio telegraphy came in the summer of 1895, when Marconi discovered that a far greater range could be attained if the height of the antenna was increased. The system was now capable of transmitting signals for up to 2 miles.

Marconi travelled with his mother to London in early 1896, hoping to find support for his work. When he arrived at Dover with strange wires and instruments inside his suitcases, he attracted the attention of Customs Officers. One of them contacted the Admiralty in London, and it was here that he gained the interest and support of William Preece, the Chief Electrical Engineer of the British Post Office.

Interest shown by the British

Engineers at the British Post Office approved of Marconi's radio equipment and, on the 13th of May 1897, Marconi sent the world's first ever wireless communication over open sea. A message reading ‘Are you ready?’ travelled a distance of almost 4 miles across the Bristol Channel from a base in Wales.

Numerous other demonstrations followed, and soon Marconi began to receive international attention. In 1899, the first demonstrations in the USA took place, and these included the reporting of the America’s Cup yacht race – transmissions took place aboard the passenger ship SS Ponce.

Transatlantic transmissions

When Marconi sailed for England on the 8th November 1899, on the SS Saint Paul, he and his team installed wireless equipment on the ship and, a few days later, the Saint Paul became the first ocean liner to report her return to England by wireless.

Shortly after the turn of the century, transatlantic transmissions were being attempted – a reception over a distance of 2,200 miles, from Poldhu in Cornwall to St John's in Newfoundland, was claimed (although this was later disputed).


On 17 December 1902, a transmission from the Marconi station in Glace Bay, Nova Scotia, became the first complete radio message to cross the Atlantic from North America. Thereafter, Marconi built stations on both sides of the Atlantic to communicate with ships at sea, in competition with other inventors. In 1907, a regular transatlantic radio-telegraph service finally began.

Marconi set up various companies and gained a reputation for being technically conservative: his spark-transmitter system could only be used for radiotelegraph operations, even as continuous-wave transmission was emerging as the future of radio communications. It was not until 1915 that Marconi began to produce significant results with continuous-wave equipment.

In 1922, regular entertainment broadcasts were transmitted from the Marconi Research Centre at Great Baddow – these formed the beginnings of the BBC.

Marconi’s personal life

In 1905, Marconi married the Hon. Beatrice O’Brien. They had three daughters and a son but divorced in 1924. The marriage was annulled in 1927 in order that Marconi could marry Maria Cristina Bezzi-Scali, with whom he had one daughter.

Marconi proudly shared the Nobel Prize in Physics with Karl Ferdinand Braun in 1909 for his contribution to radio communications. In 1914, he became a Senator in the Italian Senate, and during World War I he was put in charge of Italy's military radio service. He later joined the Italian Fascist Party and was appointed President of the Royal Academy of Italy by the dictator Benito Mussolini.

At the age of 63, Marconi died in Rome on the 20th July 1937, after suffering a series of heart attacks, and was granted a state funeral. In the British Isles, at 6 pm on the day of the funeral, all BBC transmitters and wireless Post Office transmitters observed a two-minute silence in his honour.

 

Thanks to guest author Mike James, an independent writer working with engineering applications specialist App Eng.

 

 

The post One of the all-time great inventors: Guglielmo Marconi appeared first on Relatively Interesting.

]]>
http://www.relativelyinteresting.com/one-time-great-inventors-guglielmo-marconi/feed/ 1 8678
Is Satoshi Nakamoto’s Identity Still a Secret? http://www.relativelyinteresting.com/satoshi-nakamotos-identity-still-secret/ http://www.relativelyinteresting.com/satoshi-nakamotos-identity-still-secret/#respond Thu, 06 Oct 2016 13:49:26 +0000 http://www.relativelyinteresting.com/?p=8604 Relatively Interesting -


]]>
Relatively Interesting -

There’s nothing more bizarre in a world of constant surveillance than a person with no identity. It’s the modern freak show, and has become something of a niche performance art as a consequence. Banksy, for example, has remained the archetypical anonymous artist for some time, while electro double act, Daft Punk, made $60m wearing robot masks.

It’s all in good fun – or should be.


Banksy” (CC BY 2.0) by Alan Trotter

However, the recent ‘unmasking’ of Italian novelist, Elena Ferrante, says just about everything you need to know about the value of anonymity in the 21st century – it’s a priceless commodity, and its owners become prizes to be hunted down. Guardian writer, Deborah Orr, suggested that Ferrante’s ‘outer’ probably felt that he was doing the world a public service.

The Hunt for Satoshi Nakamoto

Arguably the most famous wild goose chase of recent years is the man, woman, or entity called Satoshi Nakamoto, the inventor of the cryptocurrency, Bitcoin. Nakamoto’s identity has been a secret since the turn of the current decade, shortly after he unveiled his open source, peer-to-peer currency to the world.

Satoshi is a true enigma – nobody in the Bitcoin community ever met the figure in person – and his anonymity has almost certainly increased the stock of Bitcoin among its advocates. For example, some iGaming websites now operate almost exclusively in the cryptocurrency, while vegascasino.io has a Bitcoin slot game called The Legend of Satoshi, in addition to Bitcoin poker, sports betting, and other games.

The inventor's infamy in media circles reached fever pitch in 2014, when a 64-year-old Los Angeles man named Dorian Nakamoto was ‘unmasked’ as the father of Bitcoin and found himself pursued by reporters in a high-speed car chase through the city. The report ultimately proved erroneous, however, and the real Satoshi remained hidden.

Nakamoto was ‘revealed’ once again earlier this year, when an Australian man, Craig Wright, came forward with his PR agency and a digital signature he claimed belonged to Satoshi Nakamoto. Wright was branded a fraud almost overnight. Then, true to the character he was playing, he disappeared.

Bitcoin, bitcoin coin, physical bitcoin,” (CC BY-SA 2.0) by antanacoins

Debating the Blockchain

So, to quickly answer the question in the title, ‘yes’, Nakamoto’s identity is still an impregnable secret – but it wouldn’t be a bad thing if the real Satoshi did step forward.

Nakamoto is a pivotal figure in the Bitcoin movement for a number of reasons. Firstly, he has a vast Bitcoin fortune of around a million units that is not currently in circulation, and could change the value of the cryptocurrency if he ever sold it; and secondly, Bitcoin is going through a period of transition, and could use his input.

The size of a block, a ‘page’ in the ledger of the blockchain, is currently up for debate, as the network Bitcoin operates on needs more space. The solution to this problem could determine whether the currency remains a limited, decentralised asset (as per Nakamoto's dream) or expands to become a larger currency, closer to the dollar or the pound.

However, the legend Nakamoto has created so far seems to indicate that he, she, they, or it won’t be making an appearance at Bitcoin HQ anytime soon.

 

The post Is Satoshi Nakamoto’s Identity Still a Secret? appeared first on Relatively Interesting.

]]>
http://www.relativelyinteresting.com/satoshi-nakamotos-identity-still-secret/feed/ 0 8604