
The Silicon Review Asia

“Apache Arrow” is the new open-source project for big data


Hadoop, Spark and Kafka have already had a defining influence on the world of big data, and now there is yet another Apache project with the potential to shape the landscape even further: Apache Arrow. The Apache Software Foundation recently launched Arrow as a top-level project designed to provide a high-performance data layer for columnar in-memory analytics across disparate systems. Based on code from the related Apache Drill project, Apache Arrow can deliver benefits including performance improvements of more than 100x on analytical workloads, the foundation said. More generally, it enables multi-system workloads by eliminating cross-system communication overhead.

Code committers to the project include developers from other Apache big-data projects such as Calcite, Cassandra, Drill, Hadoop, HBase, Impala, Kudu, Parquet, Phoenix, Spark and Storm. “The open-source community has joined forces on Apache Arrow,” said Jacques Nadeau, vice president of the new project as well as Apache Drill. “We anticipate the majority of the world’s data will be processed through Arrow within the next few years.” In many workloads, between 70 percent and 80 percent of CPU cycles are spent serializing and deserializing data. Arrow alleviates that burden by enabling data to be shared among systems and processed with no serialization, deserialization or memory copies, the foundation said.

“An industry-standard columnar in-memory data layer enables users to combine multiple systems, applications and programming languages in a single workload without the usual overhead,” said Ted Dunning, vice president of the Apache Incubator and member of the Apache Arrow Project Management Committee. Arrow also supports complex data with dynamic schemas in addition to traditional relational data. For instance, it can handle JSON data, which is commonly used in Internet-of-Things (IoT) workloads, modern applications and log files. Implementations are also available for a number of programming languages for greater interoperability.

Apache Arrow software is available under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project.
