
Articles related to "human"


XRL: eXplainable Reinforcement Learning

  • The interpretability of the framework comes from the fact that each task (for instance, “stack cobblestone block”) is described by a human instruction, and the trained agents can only access learnt skills through these human descriptions, making the agent’s policies and decisions human-interpretable.
  • The resulting framework exhibited higher learning efficiency, generalized well to new environments and was inherently interpretable, since it relies only on weak human supervision, in the form of instructions, for the agent to learn new skills.
  • In Explainable Reinforcement Learning through a Causal Lens, action influence models are incorporated for Markov Decision Processes (MDP) based RL agents, extending structural causal models (SCMs) with the addition of actions.
  • Explanation generation requires the following steps: 1) defining the action influence model; 2) learning the structural equations during reinforcement learning; and finally, 3) generating an explanans for a given explanandum (a minimal sketch of these steps follows below).
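To make those three steps concrete, here is a minimal Python sketch; the class names, the causal graph and the cobblestone example are illustrative assumptions rather than the paper’s implementation, and in the actual method the structural equations are fitted from the agent’s experience during training, not written by hand.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class StructuralEq:
    """One structural equation: predicts a state variable from its causal parents."""
    parents: List[str]
    predict: Callable[[Dict[str, float]], float]  # stand-in for an equation learned during RL (step 2)

@dataclass
class ActionInfluenceModel:
    # Step 1: the causal graph, keyed by (action, affected_variable).
    equations: Dict[Tuple[str, str], StructuralEq] = field(default_factory=dict)

    def explain(self, action: str, state: Dict[str, float]) -> List[str]:
        """Step 3: build an explanans (set of causes) for the explanandum 'why this action?'."""
        explanans = []
        for (a, var), eq in self.equations.items():
            if a != action:
                continue
            predicted = eq.predict({p: state[p] for p in eq.parents})
            explanans.append(
                f"'{a}' is expected to change {var} to ~{predicted:.1f} "
                f"given {', '.join(eq.parents)}"
            )
        return explanans

# Hypothetical usage: a hand-written lambda stands in for a learned structural equation.
model = ActionInfluenceModel()
model.equations[("mine", "cobblestone")] = StructuralEq(
    parents=["cobblestone", "has_pickaxe"],
    predict=lambda s: s["cobblestone"] + s["has_pickaxe"],
)
print(model.explain("mine", {"cobblestone": 3.0, "has_pickaxe": 1.0}))
```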



A taste of ACL2020: 6 new Datasets & Benchmarks

  • While datasets for Machine Learning used to last (e.g., MNIST did not reach human performance until more than a decade after it was introduced), the latest benchmarks for Natural Language Understanding are becoming obsolete faster than expected, highlighting the importance of finding better ones.
  • Early baseline tests with a BERT model indicate that there is a lot of room for improvement and that current state-of-the-art NLU models still fail to understand emotion at this level, making it a challenging new sentiment benchmark to focus on (a rough baseline sketch follows this list).
  • After many filtering tricks, heuristics and manual validation, the resulting ‘gold’ dataset has 2.7k synsets (synonym sets) and 15k matched images, plus an extended ‘silver’ set with 10k synsets generated by a vision-language model from the natural language definitions in WordNet.
  • As Adversarial NLI similarly pointed out, many reading comprehension tasks rely on annotation artifacts and other biases in existing datasets, which allows the tasks to be completed without any real understanding.
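For readers who want to reproduce the spirit of such a baseline, the sketch below runs an off-the-shelf sentiment classifier over a couple of hand-written examples; the checkpoint name and the examples are assumptions for illustration only, not the ACL 2020 benchmarks or their official baselines.

```python
# Minimal sentiment-baseline sketch using the Hugging Face transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # illustrative checkpoint
)

examples = [
    "I can't believe they cancelled the show, I'm devastated.",
    "Fine, whatever, it's not like I cared anyway.",  # sarcasm: a typical failure case
]

for text in examples:
    pred = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    print(f"{pred['label']:>8}  ({pred['score']:.2f})  {text}")
```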



Medium writers you should follow as an aspiring Data Scientist

  • I love Will’s articles and his ability to explain Data Science concepts in a simple and understandable way.
  • He has written a lot of articles targeting beginner Data Scientists, as well as some that explain more advanced topics.
  • Additionally, he has been a Medium writer for quite a while, and during this time he has posted a lot of articles.
  • Tony regularly writes quality articles on different Data Science topics.
  • Another reason why I love his blog is that he seems to be a real human: he not only writes about Data Science but also emphasizes the importance of work-life balance and daily activity.
  • For me, his website is a place where I have found a lot of Machine Learning and Data Science tutorials that are simply well written and very informative.



A practical case on why we need the humanities

  • This means the taxes that pay to fund the public universities, where the great bulk of the study of the humanities takes place, are mostly going to come from people who have not, or could not, avail themselves of a humanistic education.
  • Even if we made the humanities available to all – a goal I robustly support (it is one reason I am spending all this time working on this open, free web platform, after all) – that effort would likely have to be publicly funded through a great many tax-payers who did not care to consume much of the academic products of the humanities (even if they consume many of its pop-cultural byproducts without knowing it).



Greatest Mistake of Our Time

  • Do you think the people at that time were aware that things would become so ugly by the end of the war?
  • Also, if we think that we are not collectively making a mistake today, that leaves a blind spot, and accidents usually happen in blind spots.
  • I think the collective mistake of our time is social media.
  • I have thought about this for a long time, and after a lot of observation and experience, both online and in real life, I am saying that the greatest mistake of our time is social media.
  • Social media is the greatest experiment of our time and it’s being run on billions of people, the effects of which are not really comprehensible to us.
  • It’s so deeply woven into our societies now that the rise and fall of political parties and world leaders is also influenced by how successful their campaigns are on social platforms.



When data is messy

  • It turns out that most of the tench pictures the neural net had seen were of people holding the fish as a trophy.
  • It has figured out dramatic stage lighting and human forms, but many of its images don’t contain anything that remotely resembles a microphone.
  • This week Vinay Prabhu and Abeba Birhane pointed out major problems with another dataset, 80 Million Tiny Images, which scraped images and automatically assigned tags to them with the help of another neural net trained on internet text.
  • This is not just a problem of bad data, but of a system in which major research groups can release datasets with such serious problems around offensive language and lack of consent.
  • Like the algorithm that upscaled Obama into a white man, ImageNet is the product of a machine learning community where there’s a huge lack of diversity.
