Big Data Professional Program

Build a foundational understanding of Big Data – its core concepts, technologies, processing frameworks, storage models, and real-world applications across industries.

Payment Schedule for the Big Data Course

  • 50% of the course fee is due at the time of registration.
  • 25% is due 60 days after your classes begin.
  • The remaining 25% is due 120 days after your classes begin.
  • This lets you split your payments while continuing the course without interruption.

Big Data Course Highlights

  • Makes Big Data easy to understand for non-technical learners.
  • Helps you explore career paths in data engineering, data analytics, and data science.
  • Improves your analytical skills and job readiness for IT and business roles alike.

About The Big Data Online Course

  • Entry-level program designed for beginners with no prior data experience.
  • Core coverage: Big Data fundamentals, data processing frameworks (Hadoop, Spark), storage systems (HDFS, NoSQL, Data Lakes).
  • Tools overview: Introduction to Hadoop ecosystem, Apache Spark, and modern data platforms.

  • Start date: Saturday, September 13, 2025
  • Training duration: 6 months (144 hours)
  • Class duration: 3 hours per session
  • Class mode: Online
  • Schedule: Weekends (Saturday & Sunday), 10 AM to 1 PM

Course Overview and Curriculum Outline

Month 1: Foundations & Architecture (Weeks 1–4)

Saturday: Introduction to Big Data Concepts

  • 0–30 min – What is Big Data?
  • 30–60 min – The 5 V’s – Volume, Velocity, Variety, Veracity, Value
  • 60–90 min – Importance of Big Data in Modern Enterprises
  • 90–120 min – Evolution from Traditional Databases to Big Data
  • 120–150 min – Use Cases in Finance, Retail, and Healthcare
  • 150–180 min – Recap + Q&A

Sunday: Industry Case Studies & Applications

  • 0–30 min – Case Study – Big Data in Banking
  • 30–60 min – Case Study – Retail and Customer Analytics
  • 60–90 min – Case Study – Healthcare Data Insights
  • 90–120 min – Role of Big Data in Decision Making
  • 120–150 min – Assignment Discussion – Industry Analysis
  • 150–180 min – Recap + Q&A

Saturday: Big Data Ecosystem & Frameworks

  • 0–30 min – Components of Big Data Ecosystem
  • 30–60 min – Hadoop Framework Overview
  • 60–90 min – MapReduce Concepts
  • 90–120 min – Big Data Analytics Lifecycle
  • 120–150 min – Tools Overview – Spark, Hive, HBase
  • 150–180 min – Recap + Q&A

Sunday: Big Data Architecture Overview

  • 0–30 min – Layers of Big Data Architecture
  • 30–60 min – Data Ingestion and ETL Process
  • 60–90 min – Real-time vs Batch Processing
  • 90–120 min – Architecture Components – Storage, Processing, Access
  • 120–150 min – Hands-on: Simple Data Flow Design
  • 150–180 min – Recap + Q&A

Saturday: Data Types & Storage Systems

  • 0–30 min – Structured, Semi-Structured, and Unstructured Data
  • 30–60 min – HDFS Architecture and Components
  • 60–90 min – Blocks, Namenode, Datanode Concept
  • 90–120 min – Data Replication and Fault Tolerance
  • 120–150 min – Hands-on: Understanding HDFS Commands (see the sketch below)
  • 150–180 min – Recap + Q&A
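
To give a flavor of the HDFS hands-on, here is a minimal sketch of the kind of commands the session covers, driven from Python for convenience. It assumes a working Hadoop installation with the hdfs client on the PATH; the paths and file names are illustrative.

    import subprocess

    def hdfs(*args):
        """Run an 'hdfs dfs' shell command and raise if it fails."""
        subprocess.run(["hdfs", "dfs", *args], check=True)

    # Create a directory in HDFS and upload a local file into it.
    hdfs("-mkdir", "-p", "/user/student/demo")
    hdfs("-put", "local_data.csv", "/user/student/demo/")

    # List the directory and print the file's contents.
    hdfs("-ls", "/user/student/demo")
    hdfs("-cat", "/user/student/demo/local_data.csv")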

Sunday: NoSQL Databases & Hive Overview

  • 0–30 min – What is NoSQL?
  • 30–60 min – Types of NoSQL – Key-Value, Document, Column
  • 60–90 min – Overview of Hive and HBase
  • 90–120 min – Querying Data with HiveQL
  • 120–150 min – Hands-on: Hive Table Creation and Queries (see the sketch below)
  • 150–180 min – Recap + Q&A
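
As a preview of the Hive hands-on, the sketch below creates and queries a table with HiveQL through a Hive-enabled Spark session, which is one common way to practice HiveQL; the table and column names are illustrative.

    from pyspark.sql import SparkSession

    # A Hive-enabled Spark session lets you run HiveQL directly.
    spark = (SparkSession.builder
             .appName("hive-demo")
             .enableHiveSupport()
             .getOrCreate())

    # Create a managed Hive table and load a few rows.
    spark.sql("CREATE TABLE IF NOT EXISTS sales (id INT, region STRING, amount DOUBLE)")
    spark.sql("INSERT INTO sales VALUES (1, 'east', 120.0), (2, 'west', 95.5)")

    # Query it with ordinary HiveQL.
    spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region").show()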

Saturday: Distributed Systems & Cloud Storage

  • 0–30 min – Introduction to Distributed File Systems
  • 30–60 min – How Data is Distributed Across Nodes
  • 60–90 min – Cloud-based Storage – AWS S3, Azure Blob
  • 90–120 min – Integrating Cloud with Hadoop
  • 120–150 min – Hands-on: Store and Retrieve Data from Cloud (see the sketch below)
  • 150–180 min – Recap + Q&A
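
As a preview of the cloud-storage hands-on, a minimal boto3 sketch that stores and retrieves a file in S3; it assumes AWS credentials are configured (e.g. via aws configure), and the bucket and key names are illustrative.

    import boto3

    s3 = boto3.client("s3")

    # Store: upload a local file to an S3 bucket.
    s3.upload_file("local_data.csv", "my-course-bucket", "raw/local_data.csv")

    # Retrieve: download the object back to disk.
    s3.download_file("my-course-bucket", "raw/local_data.csv", "copy.csv")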

Sunday: Review, Hands-on, Mock Test & Interview Prep

  • 0–60 min – Recap of Big Data Foundations & Architecture
  • 60–120 min – Hands-on: Set up Hadoop HDFS Cluster and Query in Hive/MongoDB
  • 120–180 min – Mock Test + Interview Q&A: “Explain the Big Data Ecosystem”

Month 2: Big Data Processing & Analytics (Weeks 5–8)

Saturday: Hadoop MapReduce Fundamentals

  • 0–30 min – Advanced MapReduce Concepts
  • 30–60 min – Understanding Map and Reduce Functions
  • 60–90 min – Job Execution Flow in Hadoop
  • 90–120 min – WordCount Example Walkthrough
  • 120–150 min – Hands-on: Create a Simple MapReduce Job (see the sketch below)
  • 150–180 min – Recap + Q&A
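
For a concrete picture of the WordCount exercise, here is one possible form of it as Hadoop Streaming scripts in Python. This is a sketch rather than the course's official lab; it assumes the two scripts are submitted with the hadoop-streaming jar, as noted in the comments.

    # mapper.py -- emit (word, 1) for every word on stdin
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

    # reducer.py -- sum the counts per word (input arrives sorted by key)
    import sys

    current_word, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(n)
    if current_word is not None:
        print(f"{current_word}\t{count}")

    # Submit with the streaming jar (paths vary by distribution), e.g.:
    # hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    #   -input /data/books -output /data/counts \
    #   -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py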

Sunday: Advanced MapReduce Concepts

  • 0–30 min – Combiner and Partitioner Functions
  • 30–60 min – Custom Input and Output Formats
  • 60–90 min – Job Configuration and Tuning
  • 90–120 min – Handling Large Datasets with MapReduce
  • 120–150 min – Hands-on: Analyze Large Log Files
  • 150–180 min – Recap + Q&A

Saturday: Apache Spark – Introduction

  • 0–30 min – What is Apache Spark and Why It’s Popular
  • 30–60 min – Spark Architecture – Driver, Executors, Cluster Manager
  • 60–90 min – Understanding RDD (Resilient Distributed Dataset)
  • 90–120 min – Transformations and Actions
  • 120–150 min – Hands-on: WordCount in Spark (see the sketch below)
  • 150–180 min – Recap + Q&A
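
The classic RDD version of WordCount, as a minimal PySpark sketch; the input file name is illustrative.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("wordcount").getOrCreate()
    sc = spark.sparkContext

    # Transformations are lazy; nothing runs until an action is called.
    counts = (sc.textFile("input.txt")                    # RDD of lines
                .flatMap(lambda line: line.split())       # RDD of words
                .map(lambda word: (word, 1))              # (word, 1) pairs
                .reduceByKey(lambda a, b: a + b))         # sum per word

    # collect() is the action that triggers execution.
    for word, n in counts.collect():
        print(word, n)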

Sunday: Spark SQL and DataFrames

  • 0–30 min – Introduction to Spark SQL
  • 30–60 min – Creating DataFrames from CSV and JSON Files
  • 60–90 min – DataFrame Operations and Filters
  • 90–120 min – SQL Queries on Spark Data
  • 120–150 min – Hands-on: Query Dataset using Spark SQL (see the sketch below)
  • 150–180 min – Recap + Q&A
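
A small PySpark sketch of the DataFrame and SQL operations this session covers; the orders.csv file and its columns are assumed purely for illustration.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

    # Load a CSV file into a DataFrame, inferring column types.
    df = spark.read.csv("orders.csv", header=True, inferSchema=True)

    # DataFrame API: filter and select.
    df.filter(df.amount > 100).select("order_id", "amount").show()

    # Or register the DataFrame as a view and use plain SQL.
    df.createOrReplaceTempView("orders")
    spark.sql("SELECT customer_id, SUM(amount) AS total "
              "FROM orders GROUP BY customer_id").show()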

Saturday: Data Processing Pipelines in Spark

  • 0–30 min – Understanding ETL Process in Big Data
  • 30–60 min – Data Transformation using Spark
  • 60–90 min – Joins, GroupBy, and Aggregations
  • 90–120 min – Handling Missing and Skewed Data
  • 120–150 min – Hands-on: Build Spark ETL Pipeline (see the sketch below)
  • 150–180 min – Recap + Q&A
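
A compact sketch of what a Spark ETL pipeline of this shape might look like; the source files, column names, and output path are all illustrative assumptions.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl-demo").getOrCreate()

    # Extract: read the raw source files.
    orders = spark.read.csv("raw/orders.csv", header=True, inferSchema=True)
    customers = spark.read.csv("raw/customers.csv", header=True, inferSchema=True)

    # Transform: clean missing values, join, and aggregate.
    cleaned = orders.fillna({"amount": 0.0}).dropna(subset=["customer_id"])
    enriched = cleaned.join(customers, on="customer_id", how="left")
    summary = (enriched.groupBy("region")
                       .agg(F.sum("amount").alias("revenue"),
                            F.count("*").alias("orders")))

    # Load: write the result out as Parquet.
    summary.write.mode("overwrite").parquet("curated/revenue_by_region")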

Sunday: Spark Streaming and Real-time Processing

  • 0–30 min – Introduction to Spark Streaming
  • 30–60 min – DStreams and Micro-batching
  • 60–90 min – Integrating Kafka with Spark Streaming
  • 90–120 min – Window Operations and Stateful Processing
  • 120–150 min – Hands-on: Stream Processing Example (see the sketch below)
  • 150–180 min – Recap + Q&A
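
As a preview of the streaming hands-on, here is a minimal Structured Streaming word count reading from Kafka. It assumes a local Kafka broker and the spark-sql-kafka connector on the classpath, and the topic name is illustrative. (The session itself also covers the older DStream API.)

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("stream-demo").getOrCreate()

    # Read a Kafka topic as an unbounded streaming DataFrame.
    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")
              .option("subscribe", "events")
              .load())

    # Count words arriving in the message values, micro-batch by micro-batch.
    words = events.select(
        F.explode(F.split(F.col("value").cast("string"), " ")).alias("word"))
    counts = words.groupBy("word").count()

    # Print the running counts to the console until stopped.
    query = (counts.writeStream.outputMode("complete")
             .format("console").start())
    query.awaitTermination()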

Saturday: Big Data Analytics & Visualization

  • 0–30 min – Introduction to Data Analytics on Big Data
  • 30–60 min – Connecting Spark with BI Tools
  • 60–90 min – Using Tableau for Data Exploration
  • 90–120 min – Interactive Dashboards and Visual Reports
  • 120–150 min – Hands-on: Visualize Spark Results in Tableau
  • 150–180 min – Recap + Q&A

Sunday: Review, Hands-on, Mock Test & Interview Prep

  • 0–60 min – Recap of Spark, MapReduce, and Data Processing Concepts
  • 60–120 min – Hands-on: Create End-to-End Big Data Pipeline
  • 120–180 min – Mock Test + Interview Q&A: “Explain the Difference Between Hadoop and Spark”

Month 3: Processing Frameworks (Weeks 9–12)

Saturday: Data Ingestion – ETL Concepts

  • 0–30 min – Introduction to Data Ingestion and ETL
  • 30–60 min – Types of ETL Processes – Batch vs Stream
  • 60–90 min – Data Flow Design for Ingestion
  • 90–120 min – Extract and Transform Techniques
  • 120–150 min – Hands-on: Build Simple ETL Pipeline
  • 150–180 min – Recap + Q&A

Sunday: ETL Tools and Workflow Management

  • 0–30 min – ETL in the Big Data Ecosystem
  • 30–60 min – Overview of Apache NiFi and Airflow
  • 60–90 min – Scheduling and Automation of ETL Jobs (see the sketch below)
  • 90–120 min – Hands-on: NiFi Flow for CSV Data
  • 120–150 min – Integration of Data Sources
  • 150–180 min – Recap + Q&A
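
To make the scheduling topic concrete, a minimal Airflow DAG sketch with two dependent tasks; it assumes Airflow 2.x (older releases spell the schedule argument schedule_interval), and the task bodies are placeholders.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from the source system")

    def transform():
        print("clean and reshape the extracted data")

    # A daily ETL workflow: extract runs first, then transform.
    with DAG(dag_id="daily_etl",
             start_date=datetime(2025, 9, 13),
             schedule="@daily",
             catchup=False) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t1 >> t2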

Saturday: Hadoop & MapReduce – Deep Dive

  • 0–30 min – Understanding Hadoop Architecture
  • 30–60 min – MapReduce Workflow Review
  • 60–90 min – InputSplit, Mapper, Reducer Explained
  • 90–120 min – Data Flow Between Nodes
  • 120–150 min – Hands-on: Build Custom MapReduce Job
  • 150–180 min – Recap + Q&A

Sunday: Hadoop Advanced Features

  • 0–30 min – Combiner and Partitioner Optimization
  • 30–60 min – Distributed Caching and Counters
  • 60–90 min – Job Configuration Tuning
  • 90–120 min – Monitoring Hadoop Jobs with YARN
  • 120–150 min – Hands-on: Analyze Log Data with MapReduce
  • 150–180 min – Recap + Q&A

Saturday: Apache Spark Fundamentals

  • 0–30 min – Spark Ecosystem Overview
  • 30–60 min – Spark Architecture – Driver and Executors
  • 60–90 min – RDDs and Lazy Evaluation
  • 90–120 min – Transformations and Actions
  • 120–150 min – Hands-on: Spark RDD Operations
  • 150–180 min – Recap + Q&A

Sunday: Spark SQL and DataFrames

  • 0–30 min – Introduction to DataFrames and Datasets
  • 30–60 min – Creating DataFrames from Multiple Sources
  • 60–90 min – SQL Queries on Structured Data
  • 90–120 min – Aggregations and Joins in Spark SQL
  • 120–150 min – Hands-on: Query JSON Data with Spark
  • 150–180 min – Recap + Q&A

Saturday: Real-Time Processing – Concepts

  • 0–30 min – Introduction to Streaming Systems
  • 30–60 min – Kafka Overview and Architecture
  • 60–90 min – Producers, Topics, and Consumers Explained
  • 90–120 min – Integrating Kafka with Spark Streaming
  • 120–150 min – Hands-on: Stream Processing Example
  • 150–180 min – Recap + Q&A

Sunday: Advanced Streaming Frameworks

  • 0–30 min – Introduction to Flink and Storm
  • 30–60 min – Flink Architecture and Event Processing
  • 60–90 min – Building Stateful Stream Applications
  • 90–120 min – Case Study: Real-Time Log Analytics
  • 120–150 min – Hands-on: Build Simple Flink Stream (see the sketch below)
  • 150–180 min – Recap + Mock Test + Interview Prep
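
As a taste of the Flink hands-on, a toy PyFlink stream over a small bounded collection. This is a sketch under the assumption that PyFlink is installed; the log lines are invented for illustration.

    from pyflink.datastream import StreamExecutionEnvironment

    # A toy Flink stream: keep only error lines and uppercase them.
    env = StreamExecutionEnvironment.get_execution_environment()
    env.from_collection(["error: disk full", "info: started", "error: timeout"]) \
       .filter(lambda line: line.startswith("error")) \
       .map(lambda line: line.upper()) \
       .print()

    env.execute("simple-flink-stream")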

Month 4: Analytics & Applications (Weeks 13–16)

Saturday: Introduction to Machine Learning on Big Data

  • 0–30 min – Overview of ML in Big Data Ecosystems
  • 30–60 min – Spark MLlib Introduction
  • 60–90 min – Understanding ML Pipelines in Spark
  • 90–120 min – Feature Engineering for Large Datasets
  • 120–150 min – Hands-on: Linear Regression in Spark MLlib (see the sketch below)
  • 150–180 min – Recap + Q&A
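
A minimal sketch of a Spark MLlib regression pipeline of the kind this session builds; the three-row housing dataset is invented purely for illustration.

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression

    spark = SparkSession.builder.appName("mllib-lr").getOrCreate()

    # Toy data: (square_feet, bedrooms, price); a real lab would load a file.
    df = spark.createDataFrame(
        [(1200.0, 2.0, 250000.0), (1800.0, 3.0, 340000.0), (2400.0, 4.0, 460000.0)],
        ["square_feet", "bedrooms", "price"])

    # MLlib models expect the features packed into a single vector column.
    assembler = VectorAssembler(inputCols=["square_feet", "bedrooms"],
                                outputCol="features")
    lr = LinearRegression(featuresCol="features", labelCol="price")

    model = Pipeline(stages=[assembler, lr]).fit(df)
    model.transform(df).select("price", "prediction").show()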

Sunday: Classification Techniques using Spark MLlib

  • 0–30 min – Supervised Learning Overview
  • 30–60 min – Logistic Regression and Decision Trees
  • 60–90 min – Random Forests and Gradient Boosting
  • 90–120 min – Model Evaluation and Cross-Validation
  • 120–150 min – Hands-on: Classification on a Big Data Set (see the sketch below)
  • 150–180 min – Recap + Q&A
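
To illustrate the evaluation and cross-validation topic, a short sketch that tunes a logistic regression with MLlib's CrossValidator. It assumes a DataFrame named train with "features" and "label" columns, e.g. produced by a VectorAssembler as in the regression sketch above.

    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import BinaryClassificationEvaluator
    from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

    # `train` is assumed: a DataFrame with "features" and "label" columns.
    lr = LogisticRegression(featuresCol="features", labelCol="label")

    # Try several regularization strengths with 3-fold cross-validation.
    grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1, 1.0]).build()
    cv = CrossValidator(estimator=lr,
                        estimatorParamMaps=grid,
                        evaluator=BinaryClassificationEvaluator(labelCol="label"),
                        numFolds=3)

    best_model = cv.fit(train).bestModel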

Saturday: Clustering and Unsupervised Learning in Spark

  • 0–30 min – Unsupervised Learning Concepts
  • 30–60 min – K-Means Clustering Algorithm
  • 60–90 min – Hierarchical Clustering in Big Data
  • 90–120 min – Dimensionality Reduction with PCA
  • 120–150 min – Hands-on: Customer Segmentation using K-Means (see the sketch below)
  • 150–180 min – Recap + Q&A
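
A small sketch of customer segmentation with MLlib's KMeans; the four-row spend/visits dataset is invented for illustration.

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.clustering import KMeans

    spark = SparkSession.builder.appName("kmeans-demo").getOrCreate()

    # Toy customer data: (annual_spend, visits_per_month).
    df = spark.createDataFrame(
        [(500.0, 1.0), (5200.0, 12.0), (480.0, 2.0), (4900.0, 10.0)],
        ["annual_spend", "visits_per_month"])

    features = VectorAssembler(
        inputCols=["annual_spend", "visits_per_month"],
        outputCol="features").transform(df)

    # Group the customers into two segments.
    model = KMeans(k=2, seed=42).fit(features)
    model.transform(features).select("annual_spend", "prediction").show()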

Sunday: Predictive Analytics on Large Datasets

  • 0–30 min – What is Predictive Modeling?
  • 30–60 min – Time Series Forecasting with Spark
  • 60–90 min – Handling Model Scalability in Distributed Environments
  • 90–120 min – Performance Optimization Techniques
  • 120–150 min – Hands-on: Forecasting Use Case
  • 150–180 min – Recap + Q&A

Saturday: Industry Applications – Domain Use Cases

  • 0–30 min – Introduction to AI in Industry
  • 30–60 min – Fraud Detection Models using Transaction Data
  • 60–90 min – Predictive Maintenance in Manufacturing
  • 90–120 min – Sentiment Analysis in Retail and Social Media
  • 120–150 min – Healthcare Analytics with Big Data
  • 150–180 min – Recap + Q&A

Sunday: Hands-on Industry Project

  • 0–30 min – Define Project Problem Statement
  • 30–60 min – Identify Data Sources and Preprocessing Steps
  • 60–90 min – Implement MLlib Models
  • 90–120 min – Visualize Results and Performance Metrics
  • 120–150 min – Project Discussion and Peer Review
  • 150–180 min – Recap + Q&A

Saturday: Visualization & BI Tools – Tableau and Power BI

  • 0–30 min – Introduction to Data Visualization in Big Data
  • 30–60 min – Tableau Interface and Workflow
  • 60–90 min – Creating Dashboards from Spark Data
  • 90–120 min – Connecting Power BI with Big Data Sources
  • 120–150 min – Hands-on: Build Interactive Dashboard
  • 150–180 min – Recap + Q&A

Sunday: Review, Hands-on, Mock Test & Interview Prep

  • 0–60 min – Recap of Spark MLlib and BI Concepts
  • 60–120 min – Hands-on: Create End-to-End Analytics Dashboard
  • 120–180 min – Mock Test + Interview Q&A: “How does MLlib handle distributed ML?”

Month 5: Tools & Ecosystem (Weeks 17–20)

Saturday: Hadoop Ecosystem Overview

  • 0–30 min – Introduction to Hadoop Ecosystem and Its Components
  • 30–60 min – Overview of Pig, Hive, and Oozie
  • 60–90 min – Sqoop for Data Transfer Between RDBMS and Hadoop
  • 90–120 min – Introduction to Zookeeper and Workflow Management
  • 120–150 min – Hands-on: Data Load Using Sqoop (see the sketch below)
  • 150–180 min – Recap + Q&A
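
For a concrete picture of the Sqoop hands-on, a sketch that launches a table import from Python. Sqoop is a command-line tool, so this simply wraps the CLI; the JDBC URL, credentials file, table, and paths are all illustrative assumptions.

    import subprocess

    # Import a MySQL table into HDFS with Sqoop (requires Sqoop and the
    # matching JDBC driver to be installed on the machine).
    subprocess.run([
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost:3306/shop",
        "--username", "student",
        "--password-file", "/user/student/.sqoop_pwd",
        "--table", "customers",
        "--target-dir", "/user/student/customers",
        "--num-mappers", "1",
    ], check=True)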

Sunday: Hive and Pig Hands-on

  • 0–30 min – Hive Architecture and Query Language (HiveQL)
  • 30–60 min – Creating Databases and Tables in Hive
  • 60–90 min – Query Optimization and Partitioning in Hive
  • 90–120 min – Introduction to Pig Scripts for Data Transformation
  • 120–150 min – Hands-on: ETL Workflow with Hive and Pig
  • 150–180 min – Recap + Q&A

Saturday: Workflow Orchestration with Oozie & Zookeeper

  • 0–30 min – What is Workflow Scheduling?
  • 30–60 min – Setting Up Oozie Workflows
  • 60–90 min – Managing Jobs and Dependencies
  • 90–120 min – Zookeeper in Coordination Services
  • 120–150 min – Hands-on: Create an End-to-End Oozie Job
  • 150–180 min – Recap + Q&A

Sunday: Hadoop Administration Essentials

  • 0–30 min – Cluster Management and Configuration Files
  • 30–60 min – Monitoring with YARN and ResourceManager
  • 60–90 min – Troubleshooting and Log Analysis
  • 90–120 min – Security and Access Controls in Hadoop
  • 120–150 min – Hands-on: Manage Hadoop Cluster Nodes
  • 150–180 min – Recap + Q&A

Saturday: Cloud-Native Services Overview

  • 0–30 min – Introduction to Cloud-based Data Platforms
  • 30–60 min – Overview of AWS EMR, Google BigQuery, and Azure Synapse
  • 60–90 min – Cloud Data Warehousing Concepts
  • 90–120 min – Integrating Cloud Storage with Hadoop
  • 120–150 min – Hands-on: Create and Query Dataset in BigQuery (see the sketch below)
  • 150–180 min – Recap + Q&A
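
A minimal sketch of querying BigQuery from Python with the google-cloud-bigquery client; it assumes Google Cloud credentials are configured and uses one of BigQuery's public sample datasets.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Top five most common names in a BigQuery public dataset.
    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """
    for row in client.query(query).result():
        print(row.name, row.total)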

Sunday: Cloud Data Engineering

  • 0–30 min – Using AWS EMR for Distributed Processing
  • 30–60 min – Data Ingestion to Cloud Using S3 Buckets
  • 60–90 min – Running Spark Jobs on EMR
  • 90–120 min – Querying and Managing Data in Azure Synapse
  • 120–150 min – Hands-on: Big Data ETL on Cloud Platform
  • 150–180 min – Recap + Q&A

Saturday: Emerging Data Technologies

  • 0–30 min – Introduction to Databricks and Unified Analytics
  • 30–60 min – Delta Lake Concepts and Architecture
  • 60–90 min – Apache Iceberg for Table Format Management
  • 90–120 min – Real-time Data Lakehouse Architecture
  • 120–150 min – Hands-on: Implement Delta Lake Pipeline (see the sketch below)
  • 150–180 min – Recap + Q&A
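
As a preview of the Delta Lake hands-on, a short PySpark sketch that writes, reads, and time-travels a Delta table. It assumes a Spark session configured with the delta-spark package (as on Databricks), and the path is illustrative.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-demo").getOrCreate()

    df = spark.createDataFrame([(1, "open"), (2, "closed")], ["id", "status"])

    # Write the DataFrame as a Delta table, then read it back.
    df.write.format("delta").mode("overwrite").save("/tmp/delta/events")
    spark.read.format("delta").load("/tmp/delta/events").show()

    # Time travel: read the table as of an earlier version.
    spark.read.format("delta").option("versionAsOf", 0) \
         .load("/tmp/delta/events").show()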

Sunday: Review, Hands-on, Mock Test & Interview Prep

  • 0–60 min – Recap of Hadoop, Cloud Services, and Emerging Tools
  • 60–120 min – Hands-on: End-to-End Data Workflow with Cloud + Delta Lake
  • 120–180 min – Mock Test + Interview Q&A: “Compare Hadoop and Cloud-Native Data Solutions”

Month 6: Security & Governance (Weeks 21–24)

Saturday: Introduction to Big Data Security Frameworks

  • 0–30 min – Why Big Data Security Matters
  • 30–60 min – Common Threats in Big Data Environments
  • 60–90 min – Overview of Security Layers – Data, Access, Network
  • 90–120 min – Compliance Requirements – GDPR, HIPAA, DPDP
  • 120–150 min – Security Policy Design in Big Data Architecture
  • 150–180 min – Recap + Q&A

Sunday: Authentication & Authorization in Big Data

  • 0–30 min – Identity and Access Control Models
  • 30–60 min – Understanding Role-Based Access Control (RBAC)
  • 60–90 min – Integrating Cloud IAM (AWS & Azure)
  • 90–120 min – Multi-Factor Authentication and Token-based Access
  • 120–150 min – Hands-on: Configure IAM Roles for Data Access (see the sketch below)
  • 150–180 min – Recap + Q&A
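
To make the IAM hands-on concrete, a sketch that creates and attaches a role with boto3; it assumes AWS credentials are configured, and the role name, trust relationship, and policy choice are illustrative.

    import json
    import boto3

    iam = boto3.client("iam")

    # Let EC2 instances (e.g. EMR cluster nodes) assume this role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(RoleName="bigdata-course-role",
                    AssumeRolePolicyDocument=json.dumps(trust_policy))

    # Grant read-only access to S3 via an AWS-managed policy.
    iam.attach_role_policy(
        RoleName="bigdata-course-role",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")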

Saturday: Apache Ranger and Knox Implementation

  • 0–30 min – Introduction to Apache Ranger
  • 30–60 min – Policy-based Access Control
  • 60–90 min – Ranger Plugins and Auditing Features
  • 90–120 min – Hands-on: Configure Ranger for HDFS and Hive
  • 120–150 min – Integration of Ranger with Kerberos
  • 150–180 min – Recap + Q&A

Sunday: Secure Gateway with Apache Knox

  • 0–30 min – Introduction to Knox Gateway
  • 30–60 min – Configuring REST APIs through Knox
  • 60–90 min – Integrating Knox with Hadoop Cluster
  • 90–120 min – SSL/TLS Configuration for Secure Communication
  • 120–150 min – Hands-on: Enable Knox Authentication Layer
  • 150–180 min – Recap + Q&A

Saturday: Kerberos Authentication Systems

  • 0–30 min – Overview of Kerberos Protocol
  • 30–60 min – Key Distribution Center (KDC) Explained
  • 60–90 min – Integrating Kerberos with Hadoop Cluster
  • 90–120 min – Hands-on: Configure Kerberos Authentication
  • 120–150 min – Troubleshooting Common Kerberos Issues
  • 150–180 min – Recap + Q&A

Sunday: Metadata Management & Data Catalogs

  • 0–30 min – Importance of Metadata in Big Data
  • 30–60 min – Apache Atlas Overview
  • 60–90 min – Metadata Lineage Tracking
  • 90–120 min – Data Cataloging and Discovery
  • 120–150 min – Hands-on: Create and Query Metadata in Atlas
  • 150–180 min – Recap + Q&A

Saturday: Cloud IAM Integration & Data Governance

  • 0–30 min – Introduction to Cloud IAM (AWS, Azure)
  • 30–60 min – IAM Policies, Permissions, and Role Hierarchies
  • 60–90 min – Data Governance Models for Enterprises
  • 90–120 min – Aligning IAM with Compliance Requirements
  • 120–150 min – Hands-on: Configure IAM Roles in AWS/Azure
  • 150–180 min – Recap + Q&A

Sunday: Review, Hands-on, Mock Test & Interview Prep

  • 0–60 min – Recap of Ranger, Knox, and Kerberos
  • 60–120 min – Hands-on: End-to-End Secure Big Data Architecture
  • 120–180 min – Mock Test + Interview Q&A: “How is Big Data Security Managed in Enterprises?”

The world is driven by data. From social media interactions to enterprise operations, vast amounts of information are being generated every second. This explosion of data has transformed how organizations make decisions, innovate, and deliver value. For aspiring IT professionals, analysts, and graduates, understanding Big Data is no longer optional—it’s essential. At Viewsoft Academy, we’ve designed a comprehensive Fundamentals of Big Data program to equip you with the skills and knowledge needed to thrive in this data-driven world.

This is more than just another course; it's your gateway to a high-impact career in data. We've built a clear, practical curriculum that simplifies complex Big Data concepts, technologies, and real-world applications. Whether you're a recent graduate aiming to stand out or a working professional looking to upskill or reskill, this program offers the perfect launchpad into the world of Big Data and analytics.

Why Is Big Data the Future?

Big Data is no longer just a buzzword; it is the foundation of modern business intelligence and decision-making. Organizations of all sizes now rely on data-driven insights to enhance efficiency, predict trends, and gain a competitive edge. This growing dependence on data has created a massive demand for professionals who can collect, process, and analyze large-scale data effectively.

Earning a Big Data certification is one of the most powerful ways to demonstrate your expertise and stand out in the job market. It validates your ability to handle real-world data challenges and leverage advanced tools to uncover insights. Our program prepares you for industry-recognized certifications, such as Cloudera Certified Associate (CCA) and Databricks Data Engineer, while giving you the strong foundational knowledge needed to pursue specialized Big Data and analytics credentials with confidence.

Why is the Viewsoft Academy Program the Right One?

At Viewsoft Academy, we believe in practical, hands-on learning. Our Big Data curriculum is designed and delivered by industry professionals who have led large-scale data projects and implemented enterprise analytics solutions. We don’t just teach theory—we demonstrate real-world applications and help you build the skills to practice them confidently.

Multifaceted Curriculum: Our Big Data Fundamentals course covers everything from data collection and storage to processing and analytics. You’ll explore distributed computing frameworks like Hadoop and Spark, data storage systems such as HDFS, NoSQL, and Data Lakes, along with essentials of data pipelines, visualization, and governance.

Relevant Industry Content: We continually update our course content to reflect the latest industry trends and technologies, ensuring that what you learn is immediately applicable in real-world data environments.

Skilled Instructors: Learn from experienced data professionals who bring practical insights and case studies from domains like finance, healthcare, and e-commerce—making your learning journey engaging and industry-relevant.

Flexible Learning: Designed to fit your schedule, our program offers online access to lectures, labs, and assignments, allowing you to learn anytime, anywhere, and at your own pace.

Who Should Take This Program?

This program is perfect for:

IT Professionals who want to transition into data-driven roles or manage large-scale data systems. The Viewsoft Academy Certificate in Big Data validates your competence and opens doors to advanced career opportunities in analytics and data engineering.

Computer Science and Information Technology Graduates, as well as those from related fields, who wish to strengthen their technical resumes with a recognized Big Data certification, making them more attractive to top technology employers.

Beginners and career changers looking to build a strong foundation in data technologies and launch a successful career in one of today’s fastest-growing and most impactful fields.

Your Guide to Advanced Certifications and Your Future

Although this course focuses on the fundamentals of Big Data, it also serves as a powerful stepping stone toward more advanced and specialized areas in data engineering and analytics. The concepts and tools covered here directly align with major industry-recognized Big Data certifications. For instance, our program acts as a strong foundation for credentials such as the Cloudera Certified Associate (CCA), Databricks Data Engineer, and Google Cloud Data Engineer certifications.

As the Big Data landscape continues to evolve, certifications in technologies like Apache Spark, Hadoop, and AWS Big Data are gaining increasing recognition. Our curriculum is carefully designed to remain vendor-neutral, giving you a comprehensive understanding of Big Data principles and systems that can be applied across platforms. By the end of this course, you’ll have the clarity, competence, and confidence to pursue the most relevant Big Data certifications aligned with your career goals.

Why This Is an Opportunity You Cannot Afford to Miss

Big Data professionals are in exceptionally high demand—and the opportunities are growing rapidly. Waiting too long to enter this field means missing out on high-paying roles and remarkable career growth. Our Big Data program is designed to make you market-ready quickly and effectively, equipping you with the practical skills and technical expertise needed to handle real-world data challenges and deliver measurable value to any organization.

Join the thousands of professionals who have trusted Viewsoft Academy to advance their careers. We remain committed to delivering high-quality, industry-relevant education and supporting the academic and professional success of every learner who embarks on this data-driven journey with us.

Frequently Asked Questions about the Fundamentals of Big Data by Viewsoft Academy

Question 1. Is there a prerequisite to taking the Fundamentals of Big Data?
Answer: The course is beginner-friendly and does not require any prior Big Data experience. A basic understanding of IT concepts is helpful, but the curriculum starts from the fundamentals. It is best suited for graduate students, recent graduates, or IT professionals who want to build a strong foundation in Big Data and develop valuable, industry-relevant skills.

Question 2. How long does it take to complete the course?
Answer: The program is self-paced, so you can learn at your own speed. On average, students finish the course in 8–12 weeks, spending 5–7 hours per week on video lectures and hands-on lab work. You have lifetime access to the course material.

Question 3. Will I receive a certificate after completing the course?
Answer: After successfully completing all modules and assessments, you will be eligible to receive an official Big Data certificate from Viewsoft Academy. This professional certificate validates your fundamental understanding of Big Data concepts and serves as a valuable credential to enhance your resume and professional profiles.

Question 4. Does the program guarantee a job?
Answer: Our program does not guarantee employment, but it equips you with the essential skills and foundational knowledge required for entry-level roles in Big Data and analytics. It also serves as a stepping stone toward formal industry certifications, such as Cloudera Certified Associate (CCA) and Google Cloud Data Engineer, which are highly valued in today’s job market.

Question 5. Does the course include hands-on practice?
Answer: The course includes numerous practical exercises and hands-on labs that allow you to work directly with Big Data technologies. You will apply theory to practice, developing skills in data processing, storage, analytics, and pipeline management, ensuring you are prepared to handle real-world projects and confidently tackle interview questions.

Question 6. Who are the trainers?
Answer: Our trainers are seasoned data professionals with extensive experience in designing, managing, and implementing Big Data solutions in large organizations. They bring real-world insights into the curriculum, providing practical perspectives that go beyond theory and prepare you to tackle challenges in today’s data-driven industry.