Breaking down Apple’s three new iPhone 12 camera systems

Apple has just announced its iPhone 12 lineup, and as part of today’s announcements, the company introduced a dizzying amount of camera tech scattered amongst the new iPhones. If you’re coming from an iPhone that currently has a single camera, or even a pair, you may find some serious upgrades here.

But first, let’s establish the baseline.

The iPhone SE’s camera.

One camera: iPhone SE (2020) and iPhone XR

The iPhone SE turned heads earlier this year at a $399 price, and the iPhone XR is now $499. Both have a single 12-megapixel f/1.8 “wide” camera, though they’re not quite the same: the XR has a larger sensor, while the iPhone SE appears to use a smaller one similar to that of 2017’s iPhone 8. Neither has an ultrawide or telephoto lens, nor fancy Deep Fusion processing; your portrait-mode shots are also limited to what Apple’s machine-learning algorithms can guess about depth, since there’s no second camera to verify against.
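For context on why that second camera matters: with two lenses, the same point in a scene lands at slightly different pixel positions in each view, and that shift (the disparity) maps directly to distance. Here’s a toy calculation using the standard pinhole stereo model, with made-up numbers rather than Apple’s actual calibration:

    def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        """Pinhole stereo model: depth = focal length * baseline / disparity."""
        return focal_px * baseline_m / disparity_px

    # Illustrative values only: ~2800 px focal length, 1 cm between lenses.
    for disparity_px in (70.0, 28.0, 14.0):
        depth_m = depth_from_disparity(2800.0, 0.01, disparity_px)
        print(f"disparity {disparity_px:5.1f} px -> subject at {depth_m:.2f} m")

Without that second view, the SE and XR have to infer the entire depth map from a single image, which is why their portrait modes are more limited.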

Still, each offers a six-element lens with optical image stabilization, 4K60 video recording, and a basic portrait mode.

Two cameras: iPhone 12 and iPhone 12 mini

The iPhone 12 and iPhone 12 mini come with two cameras each, but perhaps even more importantly, the main camera itself has changed. While it still offers 12 megapixels like prior phones, the lens now has seven elements and a larger f/1.6 aperture that lets 27 percent more light hit the sensor compared to the previous generation, which should mean brighter images with less noise and blur in low light, plus slightly shallower depth of field.
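That 27 percent figure checks out with simple optics: the light a lens gathers scales with aperture area, which goes as one over the f-number squared. A quick back-of-the-envelope check (the f-numbers are from Apple’s spec sheets; the rest is generic math):

    # Light gathered scales with aperture area, i.e. 1 / (f-number)^2.
    old_f = 1.8   # previous-generation main camera
    new_f = 1.6   # iPhone 12 main camera

    gain = (old_f / new_f) ** 2
    print(f"relative light gain: {gain:.3f}x (~{(gain - 1) * 100:.0f}% more)")
    # -> relative light gain: 1.266x (~27% more)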

You also get:

  • A second 12-megapixel f/2.4 ultrawide camera with a 120-degree field of view, a 5-element lens, and a 13mm-equivalent focal length, which sounds like the same ultrawide we got in the iPhone 11 and iPhone 11 Pro
  • Apple’s

Immersion Announces Partnership with ANA Avatar XPRIZE to Support Competing Teams in Developing Physical Avatar Systems

Haptics play a central role in the competition’s challenge; Immersion to provide technology solutions and advise competing teams on the use of haptics

Immersion Corporation (NASDAQ: IMMR), the leading developer and provider of technologies for haptics, today announced its support for the ANA Avatar XPRIZE – a competition challenging teams to develop a physical Avatar System that will transport an operator’s senses, actions, and presence to a remote location in real-time.

The competition aims to realize technology that can bypass the barriers of distance and time to enable people to physically experience a remote location or provide on-the-ground assistance. As of this writing, 77 Qualified Teams are competing to develop an Avatar System with which an operator can see, hear, and interact within a remote environment in a manner that feels as if they are truly there.
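To make the haptic piece of that challenge concrete, here is a minimal sketch of a bilateral teleoperation loop, in which contact forces measured at the avatar are reflected back to the operator’s device each cycle. It is purely illustrative (a virtual spring “wall” stands in for the remote environment) and is not Immersion’s actual technology:

    def remote_contact_force(avatar_pos_m: float) -> float:
        """Toy remote environment: a wall at 1.0 m modelled as a stiff spring."""
        wall_m, stiffness_n_per_m = 1.0, 200.0
        depth_m = avatar_pos_m - wall_m
        return -stiffness_n_per_m * depth_m if depth_m > 0 else 0.0

    # Each cycle: the operator's position drives the avatar; the measured
    # contact force is sent back to the operator's haptic device.
    for operator_pos_m in (0.50, 0.90, 1.05, 1.20):
        feedback_n = remote_contact_force(operator_pos_m)
        print(f"operator at {operator_pos_m:.2f} m -> feedback {feedback_n:+7.1f} N")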

“Avatars will help reimagine the human experience, giving us authentic, sensory-driven connections that will bridge our world,” said David Locke, ANA Avatar XPRIZE Director. “In order to be successful, avatars will need to have the full capabilities of human senses, in particular, touch, sight, and sound, to be able to interact with the environment as if the person is physically present. Through this partnership, Immersion can help our competing teams solve for the challenge of touching and feeling remote objects and environments.”

“The sense of touch lets you feel objects, but it also allows you to interact with them, and to sense and react to new information in the environment,” said Dave Birnbaum, Distinguished Staff, Office of the CTO at Immersion. “For a person to fully function in a location separate from their physical form, you need haptic technology. We’re pleased to be able to provide our feedback to competing teams as they develop and refine haptic avatar systems. This competition is an

New research suggests innovative method to analyse the densest star systems in the Universe

Artist’s illustration of a supernova remnant. Credit: Pixabay

In a recently published study, a team of researchers led by the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav) at Monash University suggests an innovative method for analysing gravitational waves from neutron star mergers, one in which the two stars are distinguished by type, based on how fast they’re spinning, rather than by mass.
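As a toy illustration of labelling by spin rather than by mass (the logic below is a sketch of the general idea, not the study’s actual criterion): in known double neutron star systems, the star that was spun up by accretion rotates far faster than its companion.

    def label_pair(spin_a_hz: float, spin_b_hz: float) -> dict:
        """Tag the faster-spinning star 'recycled' (spun up by accretion)
        and its companion 'slow', regardless of which one is heavier."""
        fast, slow = max(spin_a_hz, spin_b_hz), min(spin_a_hz, spin_b_hz)
        return {"recycled": fast, "slow": slow}

    # Illustrative spins only: a millisecond-pulsar-like star vs. an ordinary one.
    print(label_pair(spin_a_hz=300.0, spin_b_hz=0.5))
    # -> {'recycled': 300.0, 'slow': 0.5}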

Neutron stars are extremely dense stellar objects that form when giant stars explode and die: in the explosion, the core collapses, and protons and electrons are squeezed together into neutrons, leaving behind a remnant neutron star.

In 2017, the merging of two neutron stars, an event called GW170817, was first observed by the LIGO and Virgo gravitational-wave detectors. This merger is well-known because scientists were also able to detect the light it produced: high-energy gamma rays, visible light, and microwaves. Since then, an average of three scientific studies on GW170817 have been published every day.

In January this year, the LIGO and Virgo collaborations announced a second neutron star merger event, called GW190425. Although no light was detected, this event is particularly intriguing because the two merging neutron stars are significantly heavier than those of GW170817, as well as any previously known double neutron stars in the Milky Way.

Scientists use gravitational-wave signals—ripples in the fabric of space and time—to detect pairs of neutron stars and measure their masses. The heavier neutron star of the pair is called the ‘primary’; the lighter one, the ‘secondary’.
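For reference, the mass combination a gravitational-wave signal pins down most precisely is the ‘chirp mass’, built from the two component masses. A small illustrative calculation (the example masses are roughly GW170817-like placeholders, not values from this study):

    def chirp_mass(m1_msun: float, m2_msun: float) -> float:
        """Chirp mass in solar masses: (m1*m2)**(3/5) / (m1+m2)**(1/5)."""
        return (m1_msun * m2_msun) ** 0.6 / (m1_msun + m2_msun) ** 0.2

    primary, secondary = 1.46, 1.27   # heavier 'primary', lighter 'secondary'
    print(f"chirp mass ~= {chirp_mass(primary, secondary):.3f} solar masses")
    # -> chirp mass ~= 1.185 solar masses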

The recycled-slow labelling scheme of a binary neutron star system

A binary neutron star system usually starts with two ordinary stars, each around ten to twenty times more massive than the Sun. When these massive stars age and run out of ‘fuel’, their lives end in supernova explosions that leave behind compact remnants, or neutron stars. Each remnant neutron star weighs around

A camera or a computer: How the architecture of new home security vision systems affects choice of memory technology

A long-forecast surge in the number of products based on artificial intelligence (AI) and machine learning (ML) technologies is beginning to reach mainstream consumer markets.

It is true that research and development teams have found that, in some applications such as autonomous driving, the innate skill and judgement of a human is difficult, or perhaps even impossible, for a machine to learn. But while in some areas the hype around AI has run ahead of the reality, with less fanfare a number of real products based on ML capabilities are beginning to gain widespread interest from consumers. For instance, intelligent vision-based security and home monitoring systems have great potential: analyst firm Strategy Analytics forecasts growth in the home security camera market of more than 50% in the years between 2019 and 2023, from a market value of US$8 billion to US$13 billion.

The development of intelligent cameras is possible because one of the functions best suited to ML technology is image and scene recognition. Intelligence in home vision systems can be used to (a minimal runtime sketch follows the list):
– Detect when an elderly or vulnerable person has fallen to the ground and is potentially injured
– Monitor that the breathing of a sleeping baby is normal
– Recognise the face of the resident of a home (in the case of a smart doorbell) or a pet (for instance in a smart cat flap), and automatically allow them to enter
– Detect suspicious or unrecognised activity outside the home and trigger an intruder alarm
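At runtime, each of these use cases reduces to the same classify-and-alert loop. The sketch below is hypothetical; classify_frame stands in for a real on-device ML model, which in practice would be a quantised network running on an accelerator:

    import numpy as np

    def classify_frame(frame: np.ndarray) -> str:
        """Stand-in for an on-device ML model (e.g. a quantised CNN)."""
        return "person" if frame.mean() > 0.5 else "background"

    def monitor(frames, alert_on=("person",)) -> None:
        """Classify each incoming frame and flag events of interest."""
        for i, frame in enumerate(frames):
            label = classify_frame(frame)
            if label in alert_on:
                print(f"frame {i}: detected '{label}' -> trigger alert")

    # Simulated feed: eight 64x64 grayscale frames.
    monitor(np.random.rand(8, 64, 64))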

These new intelligent vision systems for the home, based on advanced image signal processors (ISPs), are in effect function-specific computers. The latest products in this category have adopted computer-like architectures which depend for

TACTILE SYSTEMS TECHNOLOGY Investors With Losses Greater Than $100,000 …

Press release content from Globe Newswire. The AP news staff was not involved in its creation.

PHILADELPHIA, Oct. 12, 2020 (GLOBE NEWSWIRE) — Kehoe Law Firm, P.C. is investigating potential securities claims on behalf of investors of Tactile Systems Technology, Inc. (“Tactile” or the “Company”) ( NASDAQ: TCMD ) to determine whether Tactile engaged in securities fraud or other unlawful business practices.

Tactile investors who purchased, or otherwise acquired, the Company’s securities between May 7, 2018 and June 8, 2020, both dates inclusive (the “Class Period”), and suffered losses greater than $100,000 are encouraged to complete Kehoe Law Firm’s Securities Class Action Questionnaire or contact Michael Yarnoff, Esq., (215) 792-6676, Ext. 804, myarnoff@kehoelawfirm.com, securities@kehoelawfirm.com, to discuss the securities investigation or potential legal claims.

IF YOU WISH TO SERVE AS LEAD PLAINTIFF, YOU MUST MOVE THE COURT NO LATER THAN NOVEMBER 30, 2020. To be a member of the class action, you do not need to take any action at this time; you may retain counsel of your choice; or you can take no action and remain an absent member of the class action. No class has yet been certified in the above action. Until a class is certified, you are not represented by counsel, unless you retain an attorney. An investor’s ability to share in any potential future recovery is not dependent upon serving as lead plaintiff.

According to a class action lawsuit filed on September 29, 2020 in United States District Court, District of Minnesota, during the Class Period, the Tactile Defendants made materially false and misleading statements regarding the Company’s business, operational and compliance policies, and financial results.

According to the class action complaint, the Defendants made false and/or misleading statements and/or failed to disclose that: (1) while Tactile publicly touted a