Five decades ago Intel’s co-founder, Gordon E. Moore, predicted that the number of transistors on integrated circuits would grow exponentially. His prediction covered only a ten-year period, yet the exponential growth he described has persisted ever since. It is not hard to notice the sometimes overwhelming, fast pace of technological advancement, which isn’t bad per se, but we are underestimating the impact these changes have on other aspects of our lives.
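Moore’s observation is simple compounding arithmetic: a doubling every two years (the commonly cited period, which I’m assuming here) sustained over fifty years multiplies transistor counts by 2^25 – roughly a 33-million-fold increase. A minimal sketch of that compounding:

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth in transistor count over `years`,
    assuming one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Over five decades that compounds to 2^25 ≈ 33.5 million.
fifty_year_growth = moores_law_factor(50)
```

The striking part is not the doubling itself but how quickly repeated doublings run away from intuition.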
There is plenty of discussion about the impact of technology on our society and the way we live. Everyone has an opinion, but now even teenagers are beginning to miss “the good old days”.
Things have always been changing; no one has managed to stop change, although some try to hide from it. Things are changing so fast that we now even have generation gaps between siblings, where they used to be between parents and children.
Even the simple task of communication is becoming challenging: someone over 60 prefers being called on a landline, friends around 50 are good with a call or a FB message, those under 30 feel insulted if you try a voice call but are okay with messaging, and to get in touch with a teen you have to check and see which platform is ‘in’ before you try.
To tell the truth, I have no idea what to use to wish my twelve-year-old niece a happy birthday!
News, Opinions and Censorship
We are drowning in opinions, alternative facts and extremist views, and it is practically too late to do anything; the damage is already done. When it came to regulating the flow of content on the internet, lawmakers most probably could not have anticipated the scale of the problem, and their advisors most probably had their own interests in mind.
Back in 1996, Section 230, enacted as part of the Telecommunications Act, may have helped the companies that we wrongly label as “social” media flourish by shielding them from responsibility for the content they curate and control, and no one knows how to undo it. When it comes to liability, so many concessions and exceptions were made for the internet startups that now a few trolls can topple governments.
I absolutely support freedom of speech and the possibilities the internet gives oppressed voices, but I also believe the lack of accountability on the part of the publishing platforms needs to be addressed with urgency. They need to be held responsible for what they publish, just like any other for-profit business.
Money and Cryptocurrencies
The first thing that comes to mind when thinking of technology and money is bitcoin. There is a lot of discussion about whether it is actually worth anything or whether it is a scam.
Technically, anything deemed of value can be used as currency: some exchange millions of dollars for a canvas with paint on it depending on who signed it, others will pay over a million dollars for an old dime.
Giving cryptocurrency value is not the problem, but its potential impact on the financial system creates an enormous dilemma.
A lot of people criticize the fact that cryptocurrencies can be transferred anonymously, but that can be fixed with proper regulation. The real problem will be managing debt with tools like fractional reserve banking or monetary policy in general. Many people do not realize that the money we use is not backed by anything! Banks and governments create and destroy money every day – something that would not be possible with a cryptocurrency like bitcoin, whose supply is fixed by its protocol.
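That fixed supply is arithmetic baked into bitcoin’s protocol: the block subsidy started at 50 BTC and halves every 210,000 blocks, with amounts tracked as whole satoshis. A minimal sketch of how the famous 21 million cap emerges from those rules:

```python
# Bitcoin's issuance schedule: the block subsidy starts at 50 BTC and
# halves every 210,000 blocks, with amounts kept as integer satoshis.
HALVING_INTERVAL = 210_000
SATOSHIS_PER_BTC = 100_000_000

def total_supply_btc() -> float:
    """Sum the subsidy of every block until the reward rounds to zero."""
    reward = 50 * SATOSHIS_PER_BTC  # initial subsidy in satoshis
    total = 0
    while reward > 0:
        total += HALVING_INTERVAL * reward
        reward //= 2  # integer halving, as the protocol specifies
    return total / SATOSHIS_PER_BTC
```

Summing the geometric series of halvings converges just under 21 million BTC – no central authority can print more, which is exactly why conventional monetary policy cannot be applied to it.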
Again, I doubt that the trend can be stopped, but instead of seriously looking for solutions, those responsible either try to demonize it or stick their heads in the sand and hope it goes away.
Technology and Liability
There is a lot of talk about algorithms being liable for damages they might cause, but mostly between university professors. Lawmakers seem to laugh it off.
Insurers are already grappling with the consequences of new technologies. Everything is happening so fast that there is not enough data to calculate the risks or benefits, and by the time you have enough data there will be something new to evaluate.
When it comes to auto insurance, how low can the rates go, and how do we determine when the driver is liable and when the technology is?
The health industry relies more and more on AI diagnosis. Who is responsible for a failed diagnosis? Can you sue an algorithm for malpractice?
Who is to be held responsible if a chatbot ruins your reputation on Twitter with false claims? Is it the platform, the developer, the person who owns the account – or is it just tough luck?
Not only has the question of liability in these and similar instances not been answered, in many cases it hasn’t even been asked.
I count myself as a geek, I love technology and embrace artificial intelligence, I support open source software and decentralisation, but I see a lot of danger in letting geeks rule the world unchecked.