Database

MuleSoft Embraces GraphQL to Advance API Integration

MuleSoft this week added a DataGraph capability to its Anypoint Platform that enables applications employing the GraphQL query language to instantly discover, access, and serve data from multiple existing APIs with a single query, without writing any additional code. At the same time, MuleSoft has added connectors for Automation Anywhere, Google Sheets, JIRA, NetSuite, and Stripe, along with an instance of MuleSoft Accelerators for...
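
To give a rough feel for what a single query against such an endpoint looks like from client code, here is a minimal Java sketch that posts a GraphQL query over HTTP. The endpoint URL and the customer/orders fields are hypothetical; a real query would be written against the schema your own Anypoint DataGraph exposes.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DataGraphQueryExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical GraphQL query: fetch a customer and its orders, which
        // may be served by two different underlying APIs behind one schema.
        String body = "{\"query\":\"{ customer(id: 42) { name orders { id total } } }\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.anypoint.mulesoft.com/graphql")) // placeholder URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON assembled from multiple APIs
    }
}
```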

Understanding Java Support for Persistence with JPA

Enterprise applications often deal with operations such as collecting, processing, transforming, and reporting large amounts of data. This data is typically stored in a database server at a particular location and retrieved on demand. The application is responsible for processing the data from the database and, finally, presenting it for client consumption. But the intricacies involved in managing the data exchange...
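
As a minimal sketch of JPA at work (assuming a persistence unit named "demo-unit" defined in persistence.xml, and the jakarta.persistence API; older stacks use javax.persistence instead), an entity can be mapped, stored, and retrieved on demand like this:

```java
// Customer.java -- a simple entity mapped to a table by the JPA provider
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
public class Customer {
    @Id
    private Long id;
    private String name;

    protected Customer() { }                         // JPA requires a no-arg constructor
    public Customer(Long id, String name) { this.id = id; this.name = name; }
}

// JpaExample.java -- persist and retrieve the entity
import jakarta.persistence.EntityManager;
import jakarta.persistence.EntityManagerFactory;
import jakarta.persistence.Persistence;

public class JpaExample {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo-unit");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        em.persist(new Customer(1L, "Acme"));         // written to the database
        em.getTransaction().commit();

        Customer found = em.find(Customer.class, 1L); // retrieved on demand
        System.out.println(found != null ? "found customer 1" : "not found");

        em.close();
        emf.close();
    }
}
```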

Understanding MapReduce Types and Formats

Hadoop uses the MapReduce programming model for data processing, with the input and output of the map and reduce functions represented as key-value pairs. These functions execute in parallel over datasets spread across a wide array of machines in a distributed architecture. The programming paradigm is essentially functional in nature, combining the map and reduce techniques. This article...
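
The classic word count illustrates these types: with TextInputFormat, the mapper receives (LongWritable byte offset, Text line) pairs and emits (Text word, IntWritable count) pairs for the reducer. A minimal sketch against the org.apache.hadoop.mapreduce API:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: (byte offset, line of text) -> (word, 1)
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}

// Reduce: (word, [1, 1, ...]) -> (word, total)
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```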

Introduction to Azure Serverless

The Azure Serverless Framework helps develop and deploy serverless applications via Azure Functions (a serverless compute service that enables you to run code on demand without having to provision infrastructure). Azure Serverless solutions are divided into the following platforms: Compute; Workflows and Integration; DevOps and Developer Tools; AI and Machine Learning; Database; Storage; Monitoring; and Analytics. Each of these has its own sub-categories, and I will explain them one by one. Compute: The following Azure Serverless features fall under...
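
As a small illustration of the Compute category, here is a minimal HTTP-triggered Azure Function in Java, written against the azure-functions-java-library annotations; the function name and greeting logic are placeholders:

```java
import java.util.Optional;

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;

public class HelloFunction {
    // Runs on demand when the HTTP endpoint is hit; Azure provisions and
    // scales the underlying hosts, so there is no infrastructure to manage.
    @FunctionName("hello")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req",
                         methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            ExecutionContext context) {
        String name = request.getQueryParameters().getOrDefault("name", "world");
        context.getLogger().info("Handling request for " + name);
        return request.createResponseBuilder(HttpStatus.OK)
                      .body("Hello, " + name)
                      .build();
    }
}
```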

How MapReduce Works in Hadoop

MapReduce is a model introduced by Google as a method of solving a class of Big Data problems with large clusters of inexpensive machines. Hadoop builds this model into the core of its working process. This article gives an introductory idea of the MapReduce model used by Hadoop in resolving the Big Data problem. Overview: A typical Big Data application deals with a large set of...
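
To see how the pieces come together, here is a sketch of a driver that wires the word-count mapper and reducer shown earlier into a Hadoop job; input and output paths are taken from the command line:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);     // map phase
        job.setCombinerClass(WordCountReducer.class);  // optional local pre-aggregation
        job.setReducerClass(WordCountReducer.class);   // reduce phase
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```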

Understanding the Hadoop Input Output System

Like any I/O subsystem, Hadoop comes with its own set of primitives. These primitives, although generic in nature, apply to the Hadoop I/O system as well, with some special connotations, of course. Hadoop deals with multi-terabyte datasets; a close look at these primitives gives an idea of how Hadoop handles data input and output. This article quickly skims over these...
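
One such primitive is the Writable interface, Hadoop's compact serialization contract for keys and values. A minimal sketch of a custom Writable (the PageView type and its fields are made up for illustration):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// A custom value type that Hadoop can serialize compactly and reuse across records.
public class PageView implements Writable {
    private String url;
    private long hits;

    public PageView() { }                  // required: Hadoop instantiates reflectively
    public PageView(String url, long hits) { this.url = url; this.hits = hits; }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(url);                 // serialize fields in a fixed order
        out.writeLong(hits);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        url = in.readUTF();                // deserialize in the same order
        hits = in.readLong();
    }
}
```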

Analyze Big Data with Microsoft Azure Tools

Big Data describes the large volume of data, either structured or unstructured, that inundates a business on a daily basis. It covers ways to analyze, extract information from, or otherwise deal with data sets that are too large or complex for conventional data-processing software. Big Data has the following characteristics: Volume, the quantity of generated and stored data; Variety, the type and...

Introduction to HDFS: What Is HDFS and How Does It Work?

The core technique of storing files lies in the file system used by the operating environment. Unlike common filesystems, Hadoop uses a different filesystem, one that deals with large datasets across a distributed network. It is called the Hadoop Distributed File System (HDFS). This article introduces the idea, with related background information to begin with. What Is a Filesystem? A filesystem is typically a method and...
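
As a minimal sketch of talking to HDFS from Java (the hdfs://namenode:9000 address is an assumption; real clients usually pick it up from core-site.xml), a program can write a file into the distributed filesystem and read it back:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000"); // placeholder address
        FileSystem fs = FileSystem.get(conf);

        Path path = new Path("/demo/hello.txt");
        try (FSDataOutputStream out = fs.create(path, true)) {      // write (overwrite)
            out.write("hello, HDFS".getBytes(StandardCharsets.UTF_8));
        }

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());                  // read it back
        }
        fs.close();
    }
}
```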

An Introduction to Hadoop and Big Data

Dealing with tons of data requires some special arrangement. Common computational techniques are insufficient to handle a floodgate of data, more so when it comes from multiple sources. In Big Data, the magnitude we are talking about is massive: measured in exabytes and zettabytes, that is, millions of petabytes or billions of terabytes. The framework called Hadoop is popularly used to handle some of the issues...

Understanding Big Data Analytics

Big Data is useful only when we can do something with it; otherwise, it's simply a pile of garbage. However, the effort required to dig through it is sometimes like trying to find a needle in a haystack. A meaningful pattern emerges only with a lot of analysis. Analytics, put to work, tries to analyze the data with every piece of machinery available, brains included. These...
