
What Happens When You Leave Your Dog at Home Alone

…Most dog lovers have no idea how their animals cope once they have been left alone.

Now scientists have revealed how dogs spend up to half an hour howling, barking and whining after their owner leaves them alone . . .

The first few minutes of isolation are the most stressful for a dog, according to the researchers.

Alice Potter, a pet scientist at the RSPCA, told MailOnline: ‘The separation reaction is displayed soon after the departure of the owner, normally commencing within 30 minutes, and often within the first few minutes.’

‘The most common behavioural signs of separation-related behaviour are destructive behaviour often targeted at the door the owner leaves through, various types of vocalisations (howling, barking and whining), defecating and urinating.’ (Read more from “What Happens When You Leave Your Dog at Home Alone” HERE)

Follow Joe Miller on Twitter HERE and Facebook HERE.


Wearable Technology Aims to Predict Relationships, Intervene When Computer Detects Trouble

Predictive behavior technology is all the rage in everything from advertising to policing to medicine, and is something that we have covered extensively at Activist Post (see our archives here). Technocrats everywhere believe that the supreme being of the universe should be a computer algorithm because, after all, in its perfection it knows us better than we know ourselves.

The following research from the University of Southern California is a chilling example of how the State could easily employ this technology for literal interventions where potential violence could occur. Beyond the micromanagement of adult relationships, note the final direction at the end of the article: parent-child relationships.

Are we really so lazy that we would turn over our most intimate interactions to the advice of a computer and hope that it can help manage our every emotion? Are we really that eager to completely eradicate human free will?

Mobile sensing system developed by joint USC Dornsife and USC Viterbi team could give couples the power to anticipate each other’s emotional states and adapt behavior

Your partner comes in and slams a door. What was that about? Something you did? What if you knew to anticipate it because you were notified in advance from an automated text message that he/she didn’t have a great day at work? Might that change the dynamic of your interactions?

You had a bad day. The last thing you need is to get into an argument when you get home because your partner also had a bad day. What if technology could automatically send you a notification advising you to do a short meditation module to restore your mental state? How might this affect the quality of your interactions with your partner?

In the near future, researchers from the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences believe technology might be employed to help de-escalate any potential conflicts among couples. In a collaboration between the Signal Analysis and Interpretation Laboratory (SAIL) in the Ming Hsieh Department of Electrical Engineering and the Family Studies Project in the Psychology Department at USC Dornsife, researchers employed multi-modal ambulatory measures to develop a system that detects whether conflict has occurred between a couple—a sort of seismometer of the shakes, rattles and rolls in a relationship.

The research, documented in “Using Multimodal Wearable Technology to Detect Conflict among Couples,” by Adela C. Timmons, Theodora Chaspari, Sohyun C. Han, Laura Perrone, Shrikanth S. Narayanan, and Gayla Margolin, is published by the IEEE Computer Society this month.

In order to detect intra-couple conflict, the researchers, with support from the National Science Foundation, developed algorithms to assess whether conflict was present among couples. These algorithms pulled together data from various sources, including wearables, mobile phones, and physiological signals (or bio-signals), to assess couples’ emotional states. Data collected included body temperature, heart activity, sweat, audio recordings, assessment of language content, and vocal intensity. The algorithm analyzing this data has proved to be up to 86 percent accurate in detecting conflict episodes (based on participants’ hourly self-reports of when conflict occurred). The authors of the study believe it is the first instance in which passively collected multimodal data has been used to detect conflict behavior in daily life.
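To make that pipeline description a little more concrete, here is a minimal, hypothetical sketch of the general approach: aggregate wearable and audio-derived features per hour, train a classifier against hourly self-reports of conflict, and score it on held-out hours. The feature names, the synthetic data, and the choice of logistic regression are illustrative assumptions, not the authors’ actual method.

```python
# Hypothetical sketch of multimodal conflict detection (not the study's code):
# hourly aggregated wearable/audio features, labeled by self-reported conflict.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row stands in for one hour of features:
# [body_temp, heart_rate, skin_conductance, vocal_intensity, negative_word_rate]
n_hours = 500
X = rng.normal(size=(n_hours, 5))

# Synthetic labels: 1 = the participant self-reported conflict during that hour.
# Here "conflict" is loosely driven by heart rate and vocal intensity plus noise.
logits = 1.5 * X[:, 1] + 1.0 * X[:, 3] + rng.normal(scale=0.5, size=n_hours)
y = (logits > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```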

Theodora Chaspari, an Electrical Engineering Ph.D. student in Shri Narayanan’s SAIL lab at USC Viterbi, speaks of why this particular collaboration appealed to her and the SAIL group: “We could help beyond pure engineering domains, providing more quantitative measures of human behavior.”

Lead author Adela C. Timmons, a psychology Ph.D. student in Gayla Margolin’s Family Studies Project team at USC Dornsife, runs the USC Couple Mobile Sensing Project together with Chaspari, with “the eventual goal of developing interventions to improve couple functioning.” Beyond helping couples who often can’t replicate at home the interventions and behavioral strategies they learn and practice in a therapist’s office, Timmons spoke about the importance of this research in detecting, and perhaps helping couples minimize, conflict in their relationships. She indicates that negative relationships (or the absence of positive relationships) have long been recognized as a health risk, while good-quality relationships, she said, can provide health benefits. Further, she indicates that research has shown that those with healthy relationships have less stress, and that chronic stress is known to cause “wear and tear” on the body.

The authors say that the next step in the research is to use such unobtrusive, passive technologies to anticipate conflict, perhaps five minutes before it occurs, by letting computer software estimate the likelihood that a conflict episode is imminent. The other part of anticipating conflict is developing early interventions: real-time behavioral prompts such as a text notification of a partner’s psychological state, or a cue to meditate before bringing a bad day home.
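As a rough illustration of the “anticipate, then prompt” loop described above, the trigger could be as simple as a threshold over a model’s predicted probability of near-term conflict. The threshold value, the message text, and the notify callback below are placeholders invented for this sketch, not details from the study.

```python
# Hypothetical intervention trigger: if predicted conflict risk for the next
# few minutes crosses a threshold, fire a behavioral prompt (e.g. a text).
CONFLICT_THRESHOLD = 0.7  # illustrative value, not from the study

def maybe_intervene(model, current_features, notify) -> bool:
    """Estimate near-term conflict risk and send a prompt if it is high."""
    risk = model.predict_proba([current_features])[0][1]
    if risk >= CONFLICT_THRESHOLD:
        notify("Stress signals are elevated. Consider a short breathing "
               "exercise before heading home.")
        return True
    return False

# Example usage with the classifier sketched earlier:
#   maybe_intervene(clf, X_test[0], notify=print)
```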

Chaspari acknowledges that this is not a one-size-fits-all approach. Machine learning software can learn which signals are most useful for a given individual; for any given person, certain factors might carry more weight in predicting conflict.
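One hedged way to read that point in code: fit a separate model per participant and inspect its coefficients to see which signals carry the most weight for that individual. The feature names and the per-user data layout below are assumptions made for illustration, not part of the published system.

```python
# Hypothetical per-person personalization: one model per participant, whose
# coefficients show which signals matter most for that individual.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURE_NAMES = ["body_temp", "heart_rate", "skin_conductance",
                 "vocal_intensity", "negative_word_rate"]  # assumed names

def fit_personal_models(per_user_data):
    """per_user_data maps user_id -> (X, y); returns user_id -> fitted model."""
    return {uid: LogisticRegression(max_iter=1000).fit(X, y)
            for uid, (X, y) in per_user_data.items()}

def top_signals(model, k=2):
    """Return the k features with the largest absolute weight for one person."""
    weights = np.abs(model.coef_[0])
    return [FEATURE_NAMES[i] for i in np.argsort(weights)[::-1][:k]]
```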

Once this system has been proven, the authors anticipate that it can be applied to other important relationships, such as the parent-child dynamic. (For more from the author of “Wearable Technology Aims to Predict Relationships, Intervene When Computer Detects Trouble” please click HERE)

Follow Joe Miller on Twitter HERE and Facebook HERE.


Riots, Fear, Anger Predicted by AI: Simulation ‘Detects Your Emotions Under Stress’

It is true that how you react to a situation will determine whether or not you achieve a successful outcome.

During an emergency, the strength and resolve of your mindset, and of your psychological state, will ultimately be the driving factor in whether you become a survivor or a victim. Your supplies and plans will be secondary to that. During a large-scale breakdown, keeping calm under stress and resisting crowd forces to join into mayhem will be among the most valued traits.

If the animal overtakes rational man, the battle is lost. Society will immediately devolve into the worst behaviors, and people will get hurt before order is restored.

Now, a new generation of artificial intelligence is using a facial-recognition feed of users’ features to detect emotional states during a major crisis scenario (while they are immersed in a VR simulation). While the software hasn’t yet evolved to a stable level, it is determining the threshold at which violence, chaos and absolute unrest set in. Experts believe that AI now understands when a person is upset enough to resort to rioting, and that it could be used to direct people to keep calm instead.

via the London Guardian:

An immersive film project is attempting to understand how people react in stressful situations by using artificial intelligence (AI), film and gaming technologies to place participants inside a simulated riot and then detecting their emotions in real time.

[…]Riot was inspired by global unrest, and was specifically inspired by [immersive filmmaker Karen] Palmer’s experience of watching live footage of the Ferguson protests in 2015. “I felt a big sense of frustration, anger and helplessness. I needed to create a piece of work that would encourage dialogue around these types of social issues. Riots all over the world now seem to be [the] last form of [community] expression,” she said.

[…]Designed as an immersive social digital experience, the objective is to get through a simulated riot alive. This is achieved through interacting with a variety of characters who can help you reach home. The video narrative is controlled by the emotional state of the user, which is monitored through AI software in real time.

[…] We see looters, anarchists and police playing their parts and “interacting” directly with us. What happens next is up to us: our reactions and responses determine the story, and as the screen is not enclosed in a headset, but open for others to see, it also creates a public narrative.

Currently, Riot’s pilot interface can recognise three emotional states: fear, anger and calm.
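The Guardian piece doesn’t describe Riot’s internals, so the following is only a minimal sketch of the core idea it reports: a branching narrative in which whichever of the three detected states the viewer is currently in selects the next scene. Only the three emotion labels come from the article; the scene names and function are invented for illustration.

```python
# Minimal sketch of an emotion-driven branching narrative, assuming a detector
# that returns one of the three states named in the article. Not Riot's code.
from enum import Enum

class Emotion(Enum):
    FEAR = "fear"
    ANGER = "anger"
    CALM = "calm"

# Hypothetical next-scene table: the detected state picks the branch.
NEXT_SCENE = {
    Emotion.FEAR: "bystander_offers_hiding_spot",
    Emotion.ANGER: "police_line_confrontation",
    Emotion.CALM: "stranger_guides_you_down_side_street",
}

def advance_story(detected: Emotion) -> str:
    """Return the next scene for the participant's current emotional state."""
    return NEXT_SCENE[detected]

print(advance_story(Emotion.CALM))  # -> stranger_guides_you_down_side_street
```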

Is there ultimately something we can learn about ourselves from simulations of how we would react under stress, and in the harsh environment of an unstable society in which some actors will resort to ugly extremes?

Emotional state is directly influenced by the body’s physiological reactions under stress – the fight-or-flight response kicks in. As Lizzie Bennett wrote:

The Physiological Basics

Pupils dilate to take in as much light as possible
Blood-glucose levels increase
Veins in the skin contract, allowing extra blood flow to the muscles
Smooth muscle relaxes to allow extra oxygen for the lungs
Heart rate increases
Blood pressure increases
Non-essential systems shut down (digestion for example)
The only focus is the task in hand

Such information may be useful for training; indeed, a computer-fed feedback loop could help an individual running repeated simulations learn to keep calm, holding factors such as blood pressure and heart rate under the thresholds associated with calm, sober reactions.
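A minimal sketch of such a feedback loop, assuming a wearable that reports heart rate and blood pressure for each reading; the thresholds and the coaching cue below are invented for illustration rather than taken from any cited training protocol.

```python
# Hypothetical biofeedback loop for repeated stress simulations: cue the
# trainee whenever heart rate or blood pressure drifts above a "calm" threshold.
from typing import Optional

HR_CALM_MAX = 100        # beats per minute (illustrative threshold)
SYSTOLIC_CALM_MAX = 140  # mmHg (illustrative threshold)

def biofeedback_cue(heart_rate: float, systolic_bp: float) -> Optional[str]:
    """Return a coaching cue if readings exceed the calm thresholds, else None."""
    if heart_rate > HR_CALM_MAX or systolic_bp > SYSTOLIC_CALM_MAX:
        return "Slow your breathing: exhale for twice as long as you inhale."
    return None

# Example: a few simulated readings taken during a VR riot scenario.
for hr, bp in [(88, 125), (112, 150), (95, 132)]:
    cue = biofeedback_cue(hr, bp)
    print(hr, bp, cue or "within calm range")
```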

Awareness of these factors may increase your control over them and work to your advantage.

The other side of the coin is predictive behavior, real-time surveillance analysis of the population, and the programming of artificial stimuli to make people react. Now they know what makes people tick – and what type of event, real or manufactured, might make them react.

“We have been doing research in emotion detection from facial expression, voice, body gesture, EEG, etc for many years,” said Meng. He hopes the project’s success will make people see the benefits of AI, leading to the development of smart homes, buildings and cities.

The only question is whether they want to provoke a crowd into riots (maybe to demonize the opposition) or whether they want to calm and pacify the crowds (maybe to keep people from caring about the effects of oppressive laws or bad policy).

The near future now promises to regulate our lives this way – this technology will be used to direct people in real-life living environments, with inputs from smart devices nudging your behavior, socially engineering your meaningful activities, and perhaps your thinking.

How will you prepare for a future of tracking and behavior monitoring? Will it alter your preps, change your thinking or lead you to alternate strategies? (For more from the author of “Riots, Fear, Anger Predicted by AI: Simulation ‘Detects Your Emotions Under Stress'” please click HERE)

Follow Joe Miller on Twitter HERE and Facebook HERE.


Elon Musk Launches Neuralink, a Venture to Merge the Human Brain With AI

SpaceX and Tesla CEO Elon Musk is backing a brain-computer interface venture called Neuralink, according to The Wall Street Journal. The company, which is still in the earliest stages of existence and has no public presence whatsoever, is centered on creating devices that can be implanted in the human brain, with the eventual purpose of helping human beings merge with software and keep pace with advancements in artificial intelligence. These enhancements could improve memory or allow for more direct interfacing with computing devices.

Musk has hinted at the existence of Neuralink a few times over the last six months or so. More recently, Musk told a crowd in Dubai, “Over time I think we will probably see a closer merger of biological intelligence and digital intelligence.” He added that “it’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.” On Twitter, Musk has responded to inquiring fans about his progress on a so-called “neural lace,” which is sci-fi shorthand for a brain-computer interface humans could use to improve themselves.

These types of brain-computer interfaces exist today only in science fiction. In the medical realm, electrode arrays and other implants have been used to help ameliorate the effects of Parkinson’s, epilepsy, and other neurodegenerative diseases. However, very few people on the planet have complex implants placed inside their skulls, while the number of patients with very basic stimulating devices numbers only in the tens of thousands. This is partly because it is incredibly dangerous and invasive to operate on the human brain, and only those who have exhausted every other medical option choose to undergo such surgery as a last resort. (Read more from “Elon Musk Launches Neuralink, a Venture to Merge the Human Brain With AI” HERE)

Follow Joe Miller on Twitter HERE and Facebook HERE.


Something Disgusting Was Just Found in These Coca Cola Cans

Police have launched an investigation after what appears to be human waste was found in a shipment of drinks cans at a Co Antrim factory.

Multinational Coca-Cola said it was probing the matter with the PSNI. It added products currently on sale were not affected.

The night shift at Lisburn’s Coca-Cola plant was disrupted last week when a container of cans thought to have arrived from Germany clogged up the machines – only for workers to discover a number were filled with what looked like human waste.

“It was absolutely horrible, and the machines had to be turned off for about 15 hours to be cleaned,” a source said. “It was unusual because normally the cans come from somewhere else in the UK, but this time they apparently came from Germany.

“The rumour is that some poor immigrants could have made that long journey in the lorry and that in their desperation were forced to use the cans instead of a toilet.” (Read more from “Something Disgusting Was Just Found in These Coca Cola Cans” HERE)

Follow Joe Miller on Twitter HERE and Facebook HERE.


Bigfoot Blamed in Idaho Car Crash

A northern Idaho woman told police she crashed into a deer because she was distracted by a sasquatch in her rearview mirror.

The Moscow-Pullman Daily News reports that the 50-year-old Tensed woman was driving south on U.S. Highway 95 on Wednesday when she struck a deer near Potlatch.

The woman told Benewah County Sheriff’s officials that she saw a sasquatch chasing a deer on the side of the road while driving. She says she checked one of her mirrors to get a second look at the beast, and when she looked up, the deer ran in front of her. (Read more from “Bigfoot Blamed in Idaho Car Crash” HERE)

Follow Joe Miller on Twitter HERE and Facebook HERE.


In an Unexplained Case, Brain Activity Has Been Recorded as Much as 10 Minutes After Death

Doctors in a Canadian intensive care unit have stumbled on a very strange case – when life support was turned off for four terminal patients, one of them showed persistent brain activity even after they were declared clinically dead.

For more than 10 minutes after doctors confirmed death through a range of observations, including the absence of a pulse and unreactive pupils, the patient appeared to experience the same kind of brain waves (delta wave bursts) we get during deep sleep. And it’s an entirely different phenomenon to the sudden ‘death wave’ that’s been observed in rats following decapitation.

“In one patient, single delta wave bursts persisted following the cessation of both the cardiac rhythm and arterial blood pressure (ABP),” the team from the University of Western Ontario in Canada reports.

They also found that death could be a unique experience for each individual, noting that across the four patients, the frontal electroencephalographic (EEG) recordings of their brain activity displayed few similarities both before and after they were declared dead.

“There was a significant difference in EEG amplitude between the 30-minute period before and the 5-minute period following ABP cessation for the group,” the researchers explain. (Read more from “In an Unexplained Case, Brain Activity Has Been Recorded as Much as 10 Minutes After Death” HERE)

Follow Joe Miller on Twitter HERE and Facebook HERE.


Rogue Factory Robot Blamed for Death of Human Colleague

A rogue robot has been blamed for the death of a woman killed in an accident at an auto-parts factory in Michigan.

Wanda Holbrook, who worked as a maintenance technician at the Ventra Ionia Mains plant for 12 years, was “trapped by robotic machinery and pronounced dead at the scene” in July 2015.

The 57-year-old’s widower, William Holbrook, has filed a wrongful death complaint seeking damages from five robotics companies responsible for manufacturing, installing and testing the robotics: Lincoln Electric, Flex-N-Gate, Prodomax, FANUC and Nachi.

“Wanda was working in either section 140 or 150 within the ‘100’ cell, when a robot from section 130 took Wanda by surprise, entering the section she was working,” the lawsuit alleges . . .

“A failure of one or more of defendants’ safety systems or devices had taken place, causing Wanda’s death.” (Read more from “Rogue Factory Robot Blamed for Death of Human Colleague” HERE)

Follow Joe Miller on Twitter HERE and Facebook HERE.


Revolutionary 3D-Printed House Takes Less Than a Day to Build and Only Costs $10,000

A groundbreaking potential solution to homelessness and poverty is now a reality thanks to a company called Apis Cor. The company, based in Russia and San Francisco, has developed the capability to 3D-print an entire house in just 24-hours.

As the Telegraph reports, Nikita Chen-yun-tai, the inventor of the mobile printer and founder of Apis Cor, explained his desire is “to automate everything.”

“When I first thought about creating my machine, the world already knew about construction 3D printing,” he explained. “But all printers created before shared one thing in common – they were portal type. I am sure that such a design doesn’t have a future due to its bulkiness. So I took care of this limitation and decided to upgrade a construction crane design.”

What sets Apis Cor’s product apart from the rest is that its mobile printing technology can print everything right on site. Prior to this method, portions of the house had to be made off-site and then transported. However, thanks to Apis Cor, that costly process is now a thing of the past.

“Printing of self-bearing walls, partitions and building envelope were done in less than a day: pure machine time of printing amounted to 24 hours,” the company said.

Once the printer finishes the house, it is removed with a crane and the roof is then added, followed by interior fixtures, fittings, and paint.

As ZeroHedge points out, the initial house consists of a hallway, bathroom, living room and kitchen and is located in one of Apis Cor’s facilities in Russia. The company has claimed that the house can last up to 175 years.

For now the technology is in its infancy. In a few years, however, the deflationary pressures unleashed by Apis Cor and its competitors could result in a huge deflationary wave across the construction space, meaning that a house that recently cost hundreds of thousands, or millions, of dollars could be built for a fraction of that. That would provide cheap, accessible housing to millions, perhaps in the process revolutionizing and upending the multi-trillion-dollar mortgage business that is the bedrock of the US banking industry.

This incredibly cheap and efficient home only costs $10,134.

Below is a brief video of this amazing process. The 400-square-foot home is breathing new life into the industry of 3D printing. Imagine the capabilities of this technology when applied to poverty-stricken areas throughout the globe. The implications are nothing short of revolutionary.

(For more from the author of “Revolutionary 3D-Printed House Takes Less Than a Day to Build and Only Costs $10,000” please click HERE)

Follow Joe Miller on Twitter HERE and Facebook HERE.


Artificial Human Life Could Soon Be Grown in Lab After Embryo Breakthrough

Artificial human life could soon be grown from scratch in the lab, after scientists successfully created a mammal embryo using only stem cells.

Scientists at Cambridge University mixed two kinds of mouse stem cells and placed them on a 3D scaffold. After four days of growth in a tank of chemicals designed to mimic conditions inside the womb, the cells formed the structure of a living mouse embryo.

The breakthrough has been described as a ‘masterpiece’ in bioengineering, which could eventually allow scientists to grow artificial human embryos in the lab without the need for a sperm or an egg.

Growing embryos would help researchers to study the very early stages of human life and understand why so many pregnancies fail, but the work is likely to prove controversial and raise ethical questions about what constitutes human life.

Currently scientists can carry out experiments on leftover embryos from IVF treatments, but they are in short supply and must be destroyed after 14 days. Scientists say that being able to create unlimited numbers of artificial embryos in the lab could speed up research while potentially removing some of the ethical boundaries. (Read more from “Artificial Human Life Could Soon Be Grown in Lab After Embryo Breakthrough” HERE)

Follow Joe Miller on Twitter HERE and Facebook HERE.