AMRITA BIGDATA FRAMEWORK

Why ABDF?


  • ABDF offers several processing modes under one framework, hiding their complexity from the user: Linear Execution mode, Hadoop, In-memory using Spark, Streaming with Spark & Storm, Spark over HDFS, and GPGPU-based algorithms
  • Should the users be the sole decision makers in choosing the most suitable processing modes?
  • What if one wants to try different processing options to find the best-performing mode for a given data set?
  • Are you looking for a Big Data Menu that can serve you a variety of Analytic Flavors?
  • Then ABDF is the way to go. ABDF, The Mahout to Tame the Big Data V Factors
  • ABDF, An All in One, Well Integrated Intelligent Analytic Framework
  • ABDF, The One Stop Shop for your Analytic needs
  • Easily develop BI and Data Mining pipelines through a built-in, GUI-based Process Flow Mapper
  • Users can either switch ABDF into Auto Pilot mode or choose the best among the different processing modes to analyze their data streams
  • An integrated Visualization Framework that can be used along with the Process Flow Mapper to visualize the data

    The whole idea behind building ABDF was to make Data Mining accessible to the spectrum of users outside the range of Data Scientists. ABDF is intended to narrow the gap between the regular BI and Data Mining worlds. A novice user will struggle to build a mining pipeline that incorporates all the elements required to ensure reliable output. ABDF has built-in Algorithm Templates that help build any mining process flow easily; one can then reshape a template to fine-tune the results. A well-integrated visualization engine makes it far easier for users to visualize their result sets.

    ABDF..... A Processing Supplement for the Data Mining Community


    Data scientists and researchers use data sets of varying density and frequency. Such data can be captured from a multitude of sources and transmitted in different forms and formats; it can arrive continuously or intermittently, and it can be structured or unstructured. Applying linear algorithms or processes to large data volumes degrades performance: processing takes far too long, or fails for lack of the memory demanded by the complex mathematical computations involved. Such workloads should therefore be handled by distributed processing frameworks like Hadoop. But what happens if one runs a small, 100 KB data set on Hadoop? It takes considerably more time than linear execution would. If the environment demands fast response, in-memory analytics could be the way ahead. Neither the linear nor the Hadoop mode can handle streaming data, hence the need for SPARK and STORM; SPARK can also process data in real time and in memory, which makes processing even faster. And what if the processing needs to run on GPGPUs for faster linear execution? Such scenarios can grow in number.......
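    The trade-offs above are what ABDF's Auto Pilot mode arbitrates. As a rough sketch, the selection logic can be thought of as a heuristic over the data set's traits; the mode names, the size threshold, and the function itself below are illustrative assumptions for this write-up, not ABDF's actual implementation.

```python
# Hypothetical sketch of an Auto Pilot mode-selection heuristic.
# Mode names and the 100 MB threshold are assumptions, not ABDF internals.

def select_mode(size_bytes, streaming=False, gpu_capable=False,
                low_latency=False):
    """Pick a processing mode for a data set based on its traits."""
    if streaming:
        # Only Spark Streaming / Storm can consume continuous data.
        return "spark-streaming"
    if gpu_capable:
        # Numerically heavy linear work can be offloaded to GPGPUs.
        return "gpgpu"
    if low_latency:
        # Fast-response environments favor in-memory analytics.
        return "spark-in-memory"
    if size_bytes < 100 * 1024 * 1024:
        # Small data: Hadoop's job overhead outweighs its parallelism.
        return "linear"
    # Large batch data: distribute the work across a Hadoop cluster.
    return "hadoop"

print(select_mode(100 * 1024))       # a 100 KB file -> "linear"
print(select_mode(10 * 1024 ** 3))   # a 10 GB file  -> "hadoop"
```

    The point of the sketch is the ordering: streaming and latency constraints rule out whole families of modes before data volume is even considered, which is exactly why a single fixed engine cannot serve every scenario.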

    ABDF Analytics Supplement


    ABDF provides a large pool of algorithms which we refer to as "Bare Bone Algorithms", because ABDF strips every algorithm of its customizable parameters and pre- and post-processors. Each of these can then be customized with ease to suit the need at hand. For example, for decision tree algorithms ABDF provides three variants: ID3, C4.5 and Random Forest. The user can configure the decision tree to use one of the three methods when defining the process flow. ABDF ships with a large pool of algorithms, pre- and post-processors, process elements and visualization charts that make development of analytic solutions much easier. Because ABDF standardizes the expected input and output formats of its built-in algorithms, custom-built algorithms can be integrated with the framework and made available to the development community as needed. ABDF defines a custom code development methodology which, if followed, results in easy integration of custom-built code with the framework.
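    To make the idea concrete, a standardized input/output contract plus a configurable variant could look roughly like the sketch below. The class names, the records-in/records-out contract, and the stub behavior are all assumptions invented for this illustration; ABDF's real API may differ.

```python
# Illustrative sketch (not ABDF's real API) of a "bare bone algorithm"
# with a standardized input/output contract and a configurable variant.

from abc import ABC, abstractmethod

class BareBoneAlgorithm(ABC):
    """Assumed contract: a list of dict records in, a list of dict records out."""

    @abstractmethod
    def run(self, rows):
        """Consume records and return processed records."""

class DecisionTree(BareBoneAlgorithm):
    # The three variants the text mentions.
    VARIANTS = ("id3", "c4.5", "random-forest")

    def __init__(self, variant="c4.5"):
        if variant not in self.VARIANTS:
            raise ValueError(f"unknown variant: {variant}")
        self.variant = variant

    def run(self, rows):
        # A real implementation would train/apply the chosen tree variant;
        # this stub just tags each record with the configured method.
        return [dict(row, method=self.variant) for row in rows]

tree = DecisionTree(variant="id3")
out = tree.run([{"x": 1}, {"x": 2}])
```

    Because every algorithm honors the same records-in/records-out shape, a custom algorithm written against the same base class can be dropped into a process flow alongside the built-ins.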

    Amrita Internet of Things Middleware


    IoT Middleware along with AGway (an in-house developed Intelligent Gateway) forms the core of AIoTm. It allows secure communication between heterogeneous devices using diverse protocols. Application developers can build their applications without worrying about the underlying communication complexities or the differences between the protocols the devices use. It also provides APIs to manage the middleware infrastructure and the various services offered by the interface.

    Amrita Center for Cyber Security Systems and Networks


    Amrita Center for Cyber Security Systems and Networks promotes partnership between industry, academia and the government to foster innovative research and education in Cyber Security, thus enhancing knowledge, deriving solutions, benefiting society and mitigating risks. The Center is supported by the Government of India through many of its Departments and Mission REACH programs. The Center has been designated as a Center of Relevance and Excellence (CORE) for Cyber Security in India. As a CORE, the Center aims to foster a core group of experts who can help disseminate knowledge about the ever-expanding frontiers of Cyber Security.

    Amrita Vishwa Vidyapeetham


    Amrita University is a multi-campus, multi-disciplinary research university that is accredited 'A' by NAAC and is ranked as one of the best research universities in India. The university is spread across five campuses in three states of India - Kerala, Tamil Nadu and Karnataka - with its headquarters at Ettimadai, Coimbatore, Tamil Nadu. The university continuously collaborates with top US universities, including Ivy League universities, and top European universities for regular student exchange programs, and has emerged as one of the fastest growing institutions of higher learning in India. The university is managed by the Mata Amritanandamayi Math.

Supported Technologies