Articles related to "database"


Scaling your database

  • Be aware of the tradeoffs of the different scaling options and what they entail: increased system complexity, harder maintenance, more difficult debugging, and so on.
  • Furthermore, writes to the database will be slower with this solution, because every write must also perform additional search and write operations in the index table (see the sketch after this list).
  • Generally, this will not be a problem unless your I/O is heavily skewed toward writes (especially in huge tables with several million records) and/or you overuse this solution by adding many indexes to the same table.
  • Be aware that this is not always the case: when you need to join values from different shards, especially when you also group or order by more than one column, performance can suffer heavily.
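The write penalty mentioned above is easy to demonstrate. Below is a minimal sketch (using SQLite purely for illustration; the article does not name a specific engine, table, or schema) that times bulk inserts with and without secondary indexes:

```python
# Minimal sketch: each secondary index adds work to every write.
# SQLite, the table, and the row counts here are illustrative assumptions.
import sqlite3
import time

def time_inserts(extra_indexes: int) -> float:
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    for i in range(extra_indexes):
        # Each secondary index below must also be updated on every INSERT.
        con.execute(f"CREATE INDEX idx_{i} ON users (name, email)")
    start = time.perf_counter()
    con.executemany(
        "INSERT INTO users (name, email) VALUES (?, ?)",
        ((f"user{n}", f"user{n}@example.com") for n in range(100_000)),
    )
    con.commit()
    return time.perf_counter() - start

print(f"no extra indexes:   {time_inserts(0):.3f}s")
print(f"five extra indexes: {time_inserts(5):.3f}s")  # measurably slower
```

Each secondary index is an extra structure the engine must update on every insert, which is why read-heavy tables tolerate additional indexes far better than write-heavy ones.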


Efficient smart contract security audits with machine learning and slither-simil

  • Trail of Bits has manually curated a wealth of data—years of security assessment reports—and now we’re exploring how to use this data to make the smart contract auditing process more efficient with Slither-simil.
  • Specifically, we explored machine learning (ML) approaches to automatically improve on the performance of Slither, our static analyzer for Solidity, and make life a bit easier for both auditors and clients.
  • Research on automatic vulnerability discovery in Solidity has taken off in the past two years, and tools like Vulcan and SmartEmbed, which use ML approaches to discovering vulnerabilities in smart contracts, are showing promising results.
  • First, we developed a baseline unsupervised model based on tokenizing source code functions and embedding them in a Euclidean space (Figure 8) to measure and quantify the distance (i.e., dissimilarity) between different tokens; a toy version of this tokenize-and-embed step is sketched after this list.
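As a rough intuition for that baseline, here is a deliberately simplified stand-in (this is not Slither-simil's actual model; the Solidity snippets and the bag-of-tokens embedding are illustrative assumptions):

```python
# Toy illustration of tokenize-and-embed similarity (NOT Slither-simil's
# actual model): represent each function as a bag-of-tokens count vector
# and compare vectors with Euclidean distance; smaller = more similar.
import math
import re

def tokenize(source: str) -> list[str]:
    # Crude lexer: split identifiers, numbers, and single punctuation marks.
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source)

def embed(tokens: list[str], vocab: list[str]) -> list[float]:
    # Count-vector embedding over a shared vocabulary.
    return [float(tokens.count(t)) for t in vocab]

def euclidean(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

f1 = "function transfer(address to, uint256 amount) public { balances[to] += amount; }"
f2 = "function send(address dst, uint256 value) public { balances[dst] += value; }"
f3 = "function owner() public view returns (address) { return _owner; }"

t1, t2, t3 = tokenize(f1), tokenize(f2), tokenize(f3)
vocab = sorted(set(t1) | set(t2) | set(t3))
v1, v2, v3 = (embed(t, vocab) for t in (t1, t2, t3))

print(euclidean(v1, v2))  # structurally similar functions: small distance
print(euclidean(v1, v3))  # unrelated function: larger distance
```

A learned embedding plays the same role as these count vectors, but places semantically similar code closer together, which is what makes distance useful for flagging functions that resemble known-vulnerable ones.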


Spark vs Pandas, part 2 — Spark

  • As opposed to Pandas, Spark doesn’t support any indexing for efficient access to individual rows in a DataFrame.
  • This sort of support for complex and deeply nested schemas is something that sets Spark apart from Pandas, which can only work with purely tabular data.
  • Instead of relying on usable indices, Spark will reorganize the data under the hood as part of implementing an efficient parallel and distributed join operation.
  • In contrast to Spark, Pandas is also able to perform row-wise aggregations over all columns of a DataFrame (see the sketch after this list).
  • Spark does not offer these operations, since they do not fit well into its conceptual data model, where a DataFrame has a fixed set of columns and a possibly unknown or even unbounded number of rows (as in a streaming application that continually processes new rows as they enter the system).
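To make the row-wise aggregation contrast concrete, here is a minimal sketch (assuming a local pyspark installation; the tiny DataFrame and its column names are made up for illustration):

```python
# A row-wise aggregation is one call in Pandas, while Spark (which has no
# axis=1 concept) requires an explicit column expression instead.
import pandas as pd

pdf = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})
print(pdf.sum(axis=1))  # row-wise sum across all columns: 11, 22, 33

from functools import reduce

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.master("local[1]").getOrCreate()
sdf = spark.createDataFrame(pdf)

# Sum the fixed, known set of columns explicitly; there is no generic
# "aggregate across whatever columns a row has" operation in Spark.
row_sum = reduce(lambda x, y: x + y, [F.col(c) for c in sdf.columns])
sdf.withColumn("row_sum", row_sum).show()
```

The underlying design choice: Pandas treats rows and columns almost symmetrically (axis=0 vs. axis=1), while Spark fixes the columns up front precisely so that the row dimension can be partitioned, distributed, and streamed.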


Amazon RDS on Graviton2 Processors

  • Starting today, with the availability of AWS Graviton2 processors for RDS, you can also benefit from better cost-performance for your Amazon Relational Database Service (RDS) databases compared to the previous M5 and R5 generations of database instance types.
  • The Graviton2 instance family includes several new performance optimizations, such as larger L1 and L2 caches per core, higher Amazon Elastic Block Store (EBS) throughput than comparable x86 instances, fully encrypted RAM, and many others, as detailed on this page.
  • Let’s Start Your First Graviton2-Based Instance: to start a new RDS instance, I use the AWS Management Console or the AWS Command Line Interface (CLI), just as usual, and select one of the db.m6g or db.r6g instance types (this page in the documentation has all the details; a programmatic sketch follows this list).
  • You can provision new Graviton2 Amazon Relational Database Service (RDS) instances, or migrate existing ones, in all regions where EC2 M6g and R6g are available: US East (N. Virginia), among others.
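For readers who script their infrastructure, a minimal boto3 sketch of both paths (creating a new Graviton2-based instance, and migrating an existing one) follows; the post itself only demonstrates the console and CLI, and every identifier, the region, and the password below are hypothetical placeholders:

```python
# Hedged boto3 sketch; identifiers, region, and credentials are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a new Graviton2-based instance by choosing a db.m6g/db.r6g class.
rds.create_db_instance(
    DBInstanceIdentifier="my-graviton2-db",  # hypothetical name
    DBInstanceClass="db.m6g.large",          # Graviton2 instance class
    Engine="mysql",
    AllocatedStorage=20,                     # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # use a secrets store in practice
)

# Migrate an existing instance by changing its instance class in place.
rds.modify_db_instance(
    DBInstanceIdentifier="my-existing-db",   # hypothetical name
    DBInstanceClass="db.m6g.large",
    ApplyImmediately=True,
)
```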
