
Articles related to "code"


Leetcode: "Unique Morse Code Words" Fun JavaScript One Line Solution ✨

  • This is part of my series where I explain approaches to solving coding problems.
  • The goal is to help me articulate my thought process better and to inspire new problem-solving approaches for developers!
  • International Morse Code defines a standard encoding in which each letter is mapped to a series of dots and dashes: "a" maps to ".-", "b" maps to "-...", "c" maps to "-.-.", and so on.
  • Given a list of words, each word can be written as the concatenation of the Morse code of each of its letters.
  • For example, "cab" can be written as "-.-..--...", which is the concatenation "-.-." + ".-" + "-...".
  • We'll call such a concatenation the transformation of a word.
  • Return the number of different transformations among all the words.
  • The approach looks at each letter of a word to build its Morse code transformation, then counts the distinct results.
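The article's one-liner is in JavaScript; a minimal Python sketch of the same idea (build each word's transformation, then count the distinct ones with a set) might look like this, using the standard Morse table from the problem statement:

```python
# Standard Morse table from the problem: index 0 is "a", 1 is "b", and so on.
MORSE = [".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..",
         ".---", "-.-", ".-..", "--", "-.", "---", ".--.", "--.-", ".-.",
         "...", "-", "..-", "...-", ".--", "-..-", "-.--", "--.."]

def unique_morse(words):
    # Build each word's transformation by concatenating per-letter codes,
    # then count the distinct transformations via a set comprehension.
    return len({"".join(MORSE[ord(ch) - ord("a")] for ch in word)
                for word in words})

print(unique_morse(["gin", "zen", "gig", "msg"]))  # prints 2
```

"gin" and "zen" both transform to "--...-.", and "gig" and "msg" both transform to "--...--.", so only two distinct transformations remain.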



TypeScript: Setting Up Our Compiler

  • In the last article, we talked about the basic workflow when writing in TypeScript.
  • In today's article, we will learn how to configure the TypeScript compiler.
  • Our configuration can be made in a file named tsconfig.json or at the command line.
  • For this article, we use tsconfig.json.
  • If we don't have a tsconfig.json, we have to use tsc [filename] to compile.
  • We can use tsc --init to get a default tsconfig.json.
  • This is handy, because the generated tsconfig.json includes many useful compilerOptions, along with comments explaining them.
  • Besides compilerOptions, you can also set files, include, exclude, compileOnSave, and extends.
  • Next, we will learn how to write some basic code.
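The file produced by tsc --init is much longer, but a minimal hand-written tsconfig.json might look like this (the option values here are illustrative, not a recommendation):

```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "strict": true,
    "outDir": "./dist"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}
```

With this file in place, running plain tsc compiles everything under src/ into dist/, with no need for tsc [filename].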



A unified approach for downloading, extracting, processing & using datasets

  • A first look at tensorflow_datasets suggests the package is simply a collection of various datasets, and that is correct, but the part that interested me is the foundation underneath: to support such a variety of datasets (audio, image, video, binary, etc.), it provides enough abstractions to eliminate mundane and repetitive tasks while keeping user-land code flexible enough to deal with the specifics of each dataset.
  • Using the tiny-imagenet-tfds package does not require you to know how to download and prepare the dataset, but since I am going to show you how to develop the implementation, it is important to know how this dataset is organized.
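As a hypothetical illustration of the kind of abstraction the article describes (the class and method names below are made up for the sketch, not the real tfds API): a base class owns the download/extract/prepare plumbing, and each dataset only fills in its specific processing hook.

```python
# Illustrative builder pattern: shared plumbing in the base class,
# dataset-specific logic in a subclass hook. Names are hypothetical.
class DatasetBuilder:
    def download_and_prepare(self):
        raw = self._download()       # shared, repetitive work lives here
        return self._process(raw)    # dataset-specific hook

    def _download(self):
        return "raw-archive"         # stand-in for real download/extract logic

    def _process(self, raw):
        raise NotImplementedError    # each dataset must supply this


class TinyImagenetBuilder(DatasetBuilder):
    def _process(self, raw):
        # Only the parsing specific to this dataset's on-disk layout goes here.
        return [f"example-from-{raw}"]


print(TinyImagenetBuilder().download_and_prepare())
```

Users of the finished builder never touch the download step; they only see prepared examples.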



Fail Fast and Fail Often: Handling API Errors at Scale

  • Given a user and an API client, we poll Gitplace for the user's pull requests and create Monolist action items for each one we find.
  • Note that if the API call at line 3 fails, Sidekiq will automatically retry the entire job, and hopefully it will succeed the next time.
  • If we hit an ephemeral error, say a network timeout, while retrieving the comments for the 999th pull request, then when the job retries we will start all over and make another 1000 API calls.
  • By abstracting away retry behavior, ensuring that jobs are idempotent, and making sure that we're always getting closer to success, our end users are fully oblivious to the errors and can focus on staying productive, writing code, and being the best they can be at their jobs.
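The article's code is Ruby running under Sidekiq; as a language-neutral sketch of the idempotent shape it argues for (all names here are illustrative), each pull request becomes its own unit of work that a retry can skip if it already completed:

```python
# Sketch: track completed work so a retry after an ephemeral failure
# does not redo the first 999 API calls.
processed = set()  # stand-in for durable storage of completed PR IDs

def fetch_comments(pr_id):
    return [f"comment-for-{pr_id}"]  # stand-in for the real API call

def sync_pull_requests(pr_ids):
    for pr_id in pr_ids:
        if pr_id in processed:
            continue           # idempotent: already handled, skip on retry
        fetch_comments(pr_id)  # may raise an ephemeral error mid-loop
        processed.add(pr_id)   # checkpoint progress before moving on
```

If the loop dies partway through, rerunning it only issues API calls for the pull requests that were never checkpointed, so every retry gets closer to success.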





Welcoming Semmle to the GitHub Family

  • Today we’re announcing a big step in securing the open source supply chain: we’re welcoming Semmle to the GitHub family.
  • Semmle’s revolutionary semantic code analysis engine allows developers to write queries that identify code patterns in large codebases and search for vulnerabilities and their variants.
  • Semmle is trusted by security teams at Uber, NASA, Microsoft, and Google, and it has helped find thousands of vulnerabilities in some of the largest codebases in the world, as well as over 100 CVEs in open source projects to date.
  • Security researchers use Semmle to quickly find vulnerabilities in code with simple declarative queries.
  • Semmle’s community-driven approach to identifying and preventing security vulnerabilities is the very best way forward.
  • We’re so excited to be joined by the Semmle team and to welcome their world class engineers and security researchers to GitHub. Together, we’ll bring their work to all open source communities and to our customers.



Going Fast Slowly

  • Even though I have written the vast majority of the source code, Varnish is far from a one-person project.
  • Back before the dot-com disaster, people had actually spent considerable time and effort to find out what kind of productivity to expect from a programmer; after all, how could you ever estimate a project without knowing that crucial number?
  • With the ultimate focus on quality and correctness, as in the Apollo and Space Shuttle software, productivity drops to less than one line of code per day per employee.
  • The estimated upper bound on Varnish productivity is almost an order of magnitude above Brooks's ballpark estimate, and another easily ignorable magnitude away from the unrealistic goal of matching the quality of the Space Shuttle software.
  • I was 40 years old when I started Varnish, and I had 22 years of professional experience, many of them spent staring at, and often fixing, improving, or refactoring, other people's source code.



The Crazy Job Search Process

  • If you can weed out the people with a long application process, maybe you'll get a better pool of candidates.
  • Going through three rounds of interviews gives you a better idea of how a person might perform in a job or with your team, but it will never really tell you how they'll do.
  • When you tell a person to start writing code in front of you for a problem they only get five minutes to review, that doesn't simulate the way we do real work.
  • Do your current developers even have time to review the code for all of those applicants?
  • Even if it's just ten applicants, that's still hours of time reviewing code that probably won't be useful outside of the interview.
  • Overall, the whole job process is hard and it takes a lot of patience and work to get through.





Python Data Structures Tutorial

  • Tutorial on data structures in Python: Lists, Tuples, Sets and Dictionaries.
  • Also explains sequence and string functions, slicing, concatenating, iterating, sorting, etc.
  • This course combines conceptual lectures to explain how a data structure works, and code lectures that walk through how to implement a data structure in Python code.
  • All the code lectures are based on Python 3 code in a Jupyter notebook.
  • Data structures covered in this course include native Python data structures String, List, Tuple, Set, and Dictionary, as well as Stacks, Queues, Heaps, Linked Lists, Binary Search Trees, and Graphs.
  • What you’ll learn: an in-depth look at native Python data structures (Strings, Lists, Tuples, Sets, and Dictionaries); an introduction to Queues, Stacks, Heaps, Linked Lists, Binary Search Trees, and Graphs, including how they work, their pros and cons, and how to implement and use them in Python.
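A quick taste of the native structures and sequence operations the course covers:

```python
# A few of the native structures and operations covered in the course.
nums = [3, 1, 2]             # list: a mutable sequence
point = (4, 5)               # tuple: an immutable sequence
seen = {1, 2, 2, 3}          # set: duplicates collapse to {1, 2, 3}
ages = {"ada": 36}           # dict: a key -> value mapping

nums.sort()                  # in-place sorting: [1, 2, 3]
first_two = nums[:2]         # slicing: [1, 2]
combined = nums + [4]        # concatenating: [1, 2, 3, 4]
print(first_two, combined)
```

The course builds the remaining structures (stacks, queues, heaps, linked lists, trees, graphs) on top of these primitives in Jupyter notebooks.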
