
Articles related to "image"


Samsung Galaxy S21 Ultra review: the real deal

  • The Galaxy S21 Ultra is also the first S-series phone to get support for Samsung’s S Pen stylus, though it’s sold separately and you’ll need to figure out a way to carry it (Samsung will happily sell you a bundle with a case).
  • I didn’t get the S Pen to test, so I can’t speak to whether it’s any good, but I don’t have any reason to expect it would be too different from the stylus experience on the Note line of phones.
  • What is really supposed to make the Galaxy S21 Ultra “ultra” is the camera system — it’s the most important differentiator from the other Galaxy S phones and the place where Samsung wants to rack up the biggest numbers.
  • The camera system on the Galaxy S21 Ultra is the best I’ve used on any Android phone and is extremely competitive with the iPhone 12 Pro Max. And with telephoto shots, it usually wins outright.



Cheaper Yet Refined, Samsung’s Latest Galaxy Phones Are Great

  • (In the US anyway; there are phones with similar zoom tech, but they're not sold here.) We've reached a point in smartphone camera tech where the quality afforded at 10X is excellent most of the time, and it's something I desperately want to see trickling down into more affordable handsets.
  • The quality does start to dip when the sun sets (due to its narrower f/4.9 aperture, the Ultra's zoom camera can't absorb as much light as the other cameras), but it does an admirable job when paired with Samsung's Night mode.
  • Other stripped-out features include Magnetic Secure Transmission (MST), which let you use Samsung Pay at any store that accepts credit cards, a genuine advantage over contactless payment systems that rely solely on NFC.



Interpreting Image Classification Model with LIME

  • Instead of looking at the global behavior (left image), LIME zooms into the neighborhood of the red star point, where the problem becomes local enough that a linear classifier can explain your model’s prediction (right image).
  • As you can see in the image above, let’s say we want LIME to explain why a data point denoted by the red star is classified as one class instead of the other.
  • Next, LIME uses our trained model to predict the class of each of the artificial data points it has generated.
  • Now let’s use the model to make a prediction on our input image and see how LIME can help us understand the behavior of our model (a minimal sketch follows this list).
  • Now we also know why our pre-trained model classifies our image as a panda instead of a dog or a cat.
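
For a concrete picture of those steps, here is a minimal sketch using the lime package; `model` and `image` are stand-ins for the article’s pre-trained classifier and input photo, not code from the article.

```python
# A minimal sketch using the lime package. `model` and `image` are
# assumed stand-ins: a trained Keras-style classifier and an
# (H, W, 3) photo with values in [0, 255].
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def explain_prediction(model, image: np.ndarray):
    explainer = lime_image.LimeImageExplainer()
    # LIME masks random groups of superpixels, asks the model to
    # classify every perturbed copy, and fits a local linear
    # surrogate to those predictions -- the "vicinity" step above.
    explanation = explainer.explain_instance(
        image.astype("double"),
        classifier_fn=model.predict,   # must return class probabilities
        top_labels=3,
        num_samples=1000,              # number of perturbed copies
    )
    # Keep the five superpixels that most support the top class
    # (e.g. "panda") and outline them on the image.
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5
    )
    return mark_boundaries(img / 255.0, mask)
```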



InShort: Occlusion Analysis for Explaining DNNs

  • As you have probably guessed, occlusion analysis comes with one big caveat: we have to evaluate the model for each of these perturbed inputs.
  • If your input has many dimensions, e.g. an image of 256×256 pixels, then you have to run the model 256×256 = 65,536 (!) times to get the complete analysis.
  • If we think about it closely, the change in output that we observe in the analysis can have another reason besides information being removed: the perturbed input is no longer in the data distribution we trained the model on.
  • While the effect of removing single pixels is usually negligible, removing whole patches is a bigger step away from the training data manifold and can thus have a bigger impact on the output.
  • Especially if your inputs are small or you just want something that is easy to implement and reliable (just be careful with the patch size), occlusion analysis can shine; a minimal sketch follows this list.
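
The sketch below assumes a `predict` function that maps one (H, W, C) image to class probabilities; the names and the default patch size are illustrative, not from the article.

```python
# A minimal occlusion-analysis sketch; `predict`, the patch size,
# and the baseline value are illustrative assumptions.
import numpy as np

def occlusion_map(image, predict, target_class, patch=16, baseline=0.0):
    """Slide a baseline-colored patch over the image and record how
    much the target-class score drops at each position."""
    h, w, _ = image.shape
    base_score = predict(image)[target_class]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch, :] = baseline
            # One full model evaluation per patch position: this is
            # the cost the first bullet warns about. Occluding pixel
            # by pixel instead would take all 65,536 runs.
            heat[i // patch, j // patch] = base_score - predict(occluded)[target_class]
    return heat
```

Striding by whole patches keeps the run count at (256/16)² = 256 evaluations instead of 65,536, at the price of a coarser attribution map.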



Facebook’s AI for describing photos in your feed is way smarter now

  • Thankfully, we have seen plenty of AI models in the last few years that make this task easier by automatically captioning photos.
  • Facebook, which introduced a model called Automatic Alternative Text (AAT) in 2016, has updated its model to identify objects in a photo 10 times more efficiently than before, and in greater detail.
  • But to expand its range, and cut down the training time, the team trained the new model on public images such as Instagram photos with captions and hashtags.
  • The new model also lets users choose to get detailed descriptions for all photos, or only for specific interests such as pictures from friends and family in the Facebook news feed.
  • There are plenty of enterprise solutions around that let you automatically caption your images.
  • You can learn more about Facebook’s updated image captioning model here.



An interactive review of the Oklab perceptual color space

  • In standard dynamic range, you basically assume the visual system is adapted to a particular set of viewing conditions (in fact, sRGB specifies an exact set of viewing conditions, including monitor brightness, white point, and room lighting).
  • It uses a model (known as the Barten model, and shown in Figure 4.6 of Poynton’s thesis) of the minimum contrast step perceptible at each brightness level, over all possible adaptation conditions.
  • The SMPTE ST 2084 transfer function is basically a mathematical curve-fit to the empirical Barten model, and has the property that with 12 bits of code words, each step is just under 0.9 of the minimum perceptual difference as predicted by the Barten model, across a range from 0.001 to 10,000 nits of brightness (7 orders of magnitude); the curve is sketched after this list.
  • The main difference between the various color spaces in this architecture is the nonlinear function, which determines the black-to-white ramp as discussed above.
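
For reference, here is that ST 2084 (PQ) encoding curve as a short sketch. The constants are the published ST 2084 values; the function name, clamping, and normalization are my own framing.

```python
# SMPTE ST 2084 (PQ) encoding: absolute luminance in nits -> [0, 1] signal.
def pq_encode(nits: float) -> float:
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875
    y = max(nits, 0.0) / 10000.0   # normalize to the 10,000-nit peak
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2
```

Quantizing this signal to 12-bit code words is what yields steps just under 0.9 of a Barten-model perceptual difference across the 0.001 to 10,000 nit range.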



Facebook and Instagram's AI-generated image captions now offer far more details

  • Every picture posted to Facebook and Instagram gets a caption generated by an image analysis AI, and that AI just got a lot smarter.
  • Alt text is a field in an image’s metadata that describes its contents: “A person standing in a field with a horse,” or “a dog on a boat.” This lets the image be understood by people who can’t see it.
  • These descriptions are often added manually by a photographer or publication, but people uploading photos to social media generally don’t bother, if they even have the option.
  • The team has since cooked up many improvements to it, making it faster and more detailed, and the latest update adds an option to generate a more detailed description on demand.
  • The new detailed description feature will come to Facebook first for testing, though the improved vocabulary will appear on Instagram soon.



Machine Learning Models Are Missing Contracts

  • Nowadays, pretrained machine learning models are increasingly being deployed as functions and APIs. They are part of companies’ internal codebases [1], released externally for use through APIs [2], and, in research, pretrained models are published as part of the review and reproducibility processes [3].
  • But with this machine learning model, we do not have any idea of the internal implementation, and because there is no contract, we do not know which images we can trust this model with!
  • However, this is happening more and more: as machine learning models are released as APIs for general use, or deployed internally while data streams change over time, we can no longer assume that a model’s test performance is indicative of its performance in the real world (a contract-style check is sketched after this list).
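
As an illustration of what such a contract might look like, here is a hypothetical wrapper that enforces pre- and postconditions around a model call; every name, shape, and threshold is an assumption for illustration, not something the article specifies.

```python
# Hypothetical "contract" around a pretrained image model: reject
# inputs the model was never trained on, and sanity-check outputs.
import numpy as np

class ContractViolation(ValueError):
    pass

def predict_with_contract(model, image: np.ndarray) -> np.ndarray:
    # Precondition: the input must look like the training data.
    if image.shape != (224, 224, 3):
        raise ContractViolation(f"expected (224, 224, 3), got {image.shape}")
    if image.dtype != np.float32 or image.min() < 0.0 or image.max() > 1.0:
        raise ContractViolation("expected float32 pixels in [0, 1]")
    probs = model.predict(image[None, ...])[0]
    # Postcondition: the output must be a probability distribution.
    if not np.isclose(probs.sum(), 1.0, atol=1e-3):
        raise ContractViolation("model output is not a distribution")
    return probs
```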



Raspberry Pi Lego Sorter

  • While Daniel was inspired by previous LEGO sorters, his creation is a huge step up from them: it can recognise absolutely every LEGO brick ever created, even bricks it has never seen before.
  • What makes Daniel’s project a ‘world first’ is that he trained his classifier using 3D model images of LEGO bricks, which is how the machine can classify absolutely any LEGO brick it’s faced with, even if it has never seen it in real life before.
  • A Raspberry Pi Camera Module captures video of each brick, which a Raspberry Pi 3 Model B+ then processes and wirelessly sends to a more powerful computer able to run the neural network that classifies the parts.
  • The classification decision is then sent back to the sorting machine so it can direct the brick, via a series of servo-controlled gates, into the right output bucket (a sketch of this capture-classify-route loop follows this list).
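
Here is a rough sketch of how that loop might look on the Pi side. The server address, endpoint, response format, and gate function are all invented for illustration; only the division of labor (the Pi captures, a remote machine classifies) comes from the article.

```python
# Hypothetical Pi-side capture-classify-route loop.
import io
import requests
from picamera import PiCamera

SERVER = "http://192.168.1.50:8000/classify"   # assumed LAN address

camera = PiCamera(resolution=(640, 480))

def classify_frame() -> str:
    buf = io.BytesIO()
    camera.capture(buf, format="jpeg")
    # The Pi 3 only captures and forwards frames; the neural network
    # runs on the more powerful machine behind this endpoint.
    reply = requests.post(SERVER, data=buf.getvalue(),
                          headers={"Content-Type": "image/jpeg"})
    return reply.json()["part_id"]   # hypothetical part identifier

def route_brick(part_id: str) -> None:
    # Placeholder for the servo-controlled gates that direct the
    # brick into the right output bucket.
    print("routing brick", part_id)

route_brick(classify_frame())
```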



Skyqraft raises $2.2M seed for its powerline issue detection system

  • Skyqraft, the Swedish startup using AI and drones for electricity powerline inspection, has raised $2.2 million in seed funding, capital it will use to further develop its technology and expand its operations in Europe and the U.S. The seed round is led by Subvenio Invest, with participation from pre-seed backer Antler, Next Human Ventures, and unnamed angel investors.
  • Founded in March 2019 and launched that September, Skyqraft provides what it calls “smart” infrastructure inspections for powerlines.
  • Skyqraft says the system can process high volumes of image data and is able to detect equipment issues “rapidly and with high accuracy”.
  • The Swedish company claims that by using Skyqraft, utilities can shorten a 25 km powerline inspection from two days to “three minutes”.
  • Skyqraft also says it is negotiating a series of larger-scale pilots in the U.S. in 2021 with the global utility company Iberdrola.
