

This article was published as a part of the Data Science Blogathon

Introduction

An Artificial Neural Network (ANN) is a computing system designed to simulate the way the human brain analyzes and processes information. It is inspired by the working of the human brain and is therefore a set of algorithms that tries to mimic the brain and learn from experience.

In this article, we are going to learn about how a basic Neural Network works and how it improves itself to make the best predictions.

Table of Contents

Neural networks and their components

Perceptron and Multilayer Perceptron

Step by Step Working of Neural Network

Back Propagation and how it works

Brief about Activation Functions

Artificial Neural Networks and Their Components

A Neural Network is a computational learning system that uses a network of functions to understand and translate a data input of one form into a desired output, usually in another form. The concept of the artificial neural network was inspired by human biology and the way neurons of the human brain function together to understand inputs from human senses.

In simple words, Neural Networks are a set of algorithms that try to recognize patterns, relationships, and information in data through a process that is inspired by, and works like, the human brain.

Components / Architecture of Neural Network

A simple neural network consists of three components:

Input layer

Hidden layer

Output layer

                                                                          Source: Wikipedia

Input Layer: Also known as input nodes, this is where inputs/information from the outside world are provided to the model to learn and derive conclusions from. Input nodes pass the information to the next layer, i.e. the hidden layer.

Hidden Layer: Hidden layer is the set of neurons where all the computations are performed on the input data. There can be any number of hidden layers in a neural network. The simplest network consists of a single hidden layer.

Output Layer: The output layer gives the output/conclusions of the model derived from all the computations performed. There can be single or multiple nodes in the output layer. For a binary classification problem the output node is 1, but in the case of multi-class classification, the output nodes can be more than 1.
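To make the three layers concrete, here is a minimal sketch in plain Python of a tiny network with 3 input nodes, one hidden layer of 2 neurons, and a single output node. The weights and biases are made-up illustrative values, not trained ones:

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sum of inputs plus bias,
    passed through a sigmoid activation, for each neuron."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-z)))   # sigmoid activation
    return outputs

# Toy network: 3 inputs -> hidden layer of 2 neurons -> 1 output node.
x = [0.5, -1.0, 2.0]                                          # input layer
hidden = dense(x, weights=[[0.1, 0.4, -0.2],
                           [-0.3, 0.2, 0.5]], biases=[0.0, 0.1])
y = dense(hidden, weights=[[0.7, -0.6]], biases=[0.2])        # output layer
print(y)   # a single value in (0, 1) -- suits a binary classification output
```

With a multi-class problem, the output layer would simply have more rows of weights (one per class).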

Perceptron and Multi-Layer Perceptron

A Perceptron is a simple form of Neural Network that consists of a single layer where all the mathematical computations are performed.

A Multilayer Perceptron, also known as an Artificial Neural Network, consists of more than one perceptron grouped together to form a multiple-layer neural network.

                                                                       Source: Medium

In the above image, The Artificial Neural Network consists of four layers interconnected with each other:

An input layer, with 6 input nodes

Hidden Layer 1, with 4 hidden nodes/4 perceptrons

Hidden layer 2, with 4 hidden nodes

Output layer with 1 output node

Step by Step Working of the Artificial Neural Network 

In the first step, input units are passed, i.e. data is passed with some weights attached to it, to the hidden layer. We can have any number of hidden layers. In the above image, the inputs x1, x2, x3, …, xn are passed.

Each hidden layer consists of neurons. All the inputs are connected to each neuron.

After passing on the inputs, all the computation is performed in the hidden layer (Blue oval in the picture)

The computation performed in the hidden layers is done in two steps, which are as follows:

First of all, all the inputs are multiplied by their weights. A weight is the coefficient of each input; it shows the strength of the particular input. After assigning the weights, a bias variable is added. Bias is a constant that helps the model fit in the best way possible.

Z1 = W1*In1 + W2*In2 + W3*In3 + W4*In4 + W5*In5 + b

W1, W2, W3, W4, W5 are the weights assigned to the inputs In1, In2, In3, In4, In5, and b is the bias.

Then in the second step, the activation function is applied to the linear equation Z1. The activation function is a nonlinear transformation that is applied to the input before sending it to the next layer of neurons. The importance of the activation function is that it introduces nonlinearity into the model.
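As a quick illustration of the two steps above, the sketch below computes Z1 for five made-up inputs and weights and then applies a sigmoid activation (the values are arbitrary, chosen only for demonstration):

```python
import math

inputs  = [0.5, 0.1, -0.4, 0.8, 0.2]    # In1 ... In5
weights = [0.9, -0.5, 0.3, 0.7, -0.2]   # W1 ... W5
b = 0.1                                  # bias

# Step 1: linear combination  Z1 = W1*In1 + ... + W5*In5 + b
z1 = sum(w * x for w, x in zip(weights, inputs)) + b

# Step 2: nonlinear activation (sigmoid used here as the example)
a1 = 1 / (1 + math.exp(-z1))
print(z1, a1)   # z1 ≈ 0.9, a1 ≈ 0.711
```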

There are several activation functions that will be listed in the next section.

    The whole process described in point 3 is performed in each hidden layer. After passing through every hidden layer, we move to the last layer i.e our output layer which gives us the final output.

The process explained above is known as Forward Propagation.

      After getting the predictions from the output layer, the error is calculated i.e the difference between the actual and the predicted output.

      If the error is large, then the steps are taken to minimize the error and for the same purpose, Back Propagation is performed.

What is Back Propagation and How Does it Work?

      Back Propagation is the process of updating and finding the optimal values of weights or coefficients which helps the model to minimize the error i.e difference between the actual and predicted values.

But the question here is: how are the weights updated and the new weights calculated?

The weights are updated with the help of optimizers. Optimizers are the methods/mathematical formulations used to change the attributes of a neural network, i.e. the weights, to minimize the error.

      Back Propagation with Gradient Descent 

      Gradient Descent is one of the optimizers which helps in calculating the new weights. Let’s understand step by step how Gradient Descent optimizes the cost function.

In the image below, the curve is our cost function curve and our aim is to minimize the error such that Jmin, i.e. the global minimum, is achieved.

                                                        Source: Quora

      Steps to achieve the global minima:

First, the weights are initialized randomly, i.e. random values of the weights and intercepts are assigned to the model during forward propagation, and the errors are calculated after all the computation. (As discussed above)

      Then the gradient is calculated i.e derivative of error w.r.t current weights

Then new weights are calculated using the update formula W(new) = W(old) - a * dE/dW, where a is the learning rate, a parameter also known as the step size, used to control the speed or steps of backpropagation. It gives additional control over how fast we want to move on the curve to reach the global minimum.

4. This process of calculating the new weights, then the errors from the new weights, and then updating the weights continues until we reach the global minimum and the loss is minimized.

A point to note here is that the learning rate, i.e. a in our weight update equation, should be chosen wisely. The learning rate is the amount of change or step size taken towards reaching the global minimum. It should not be very small, as convergence will take a long time, and it should not be very large, or the descent may overshoot and never reach the global minimum at all. Therefore, the learning rate is a hyperparameter that we have to choose based on the model.
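To see the update rule in action, here is a minimal sketch that fits a one-parameter model y = w * x to data generated by y = 2 * x using gradient descent with mean squared error. The data, learning rate, and iteration count are illustrative choices:

```python
# Gradient descent on a one-parameter model y_hat = w * x,
# minimizing mean squared error against data where y = 2 * x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0     # initial weight (would normally be random)
a = 0.05    # learning rate, the "a" in the update rule

for _ in range(200):
    # gradient of MSE w.r.t. w:  dE/dw = -2/N * sum(x * (y - w*x))
    grad = -2 / len(xs) * sum(x * (y - w * x) for x, y in zip(xs, ys))
    w = w - a * grad    # W(new) = W(old) - a * dE/dW
print(round(w, 4))      # converges to 2.0
```

A much larger learning rate (say a = 0.1 here) would make the updates overshoot and diverge, which is exactly the trade-off described above.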

                                                                            Source: Educative.io

      To know the detailed maths and the chain rule of Backpropagation, refer to the attached tutorial.

      Brief about Activation Functions

      Activation functions are attached to each neuron and are mathematical equations that determine whether a neuron should be activated or not based on whether the neuron’s input is relevant for the model’s prediction or not. The purpose of the activation function is to introduce the nonlinearity in the data.

      Various Types of Activation Functions are :

      Sigmoid Activation Function

      TanH / Hyperbolic Tangent Activation Function

      Rectified Linear Unit Function (ReLU)

      Leaky ReLU

      Softmax

      Refer to this blog for a detailed explanation of Activation Functions.
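For reference, the activation functions listed above can be sketched in a few lines of plain Python (the leaky-ReLU slope of 0.01 is one common choice, not a fixed standard):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))            # squashes to (0, 1)

def tanh(z):
    return math.tanh(z)                      # squashes to (-1, 1)

def relu(z):
    return max(0.0, z)                       # zero for negative inputs

def leaky_relu(z, slope=0.01):
    return z if z > 0 else slope * z         # small gradient for negatives

def softmax(zs):
    exps = [math.exp(z - max(zs)) for z in zs]   # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]         # outputs form a probability distribution

print(sigmoid(0.0))                          # 0.5
print(relu(-3.0))                            # 0.0
print(sum(softmax([1.0, 2.0, 3.0])))         # ≈ 1.0 (probabilities sum to one)
```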

      End Notes

Here I conclude my step-by-step explanation of the first Neural Network of Deep Learning, the ANN. I tried to explain the process of Forward Propagation and Backpropagation in the simplest way possible. I hope going through this article was worth your time 🙂

      Please feel free to connect with me on LinkedIn and share your valuable inputs. Kindly refer to my other articles here.

      About the Author

      I am Deepanshi Dhingra currently working as a Data Science Researcher, and possess knowledge of Analytics, Exploratory Data Analysis, Machine Learning, and Deep Learning.

The media shown in this article on Artificial Neural Networks are not owned by Analytics Vidhya and are used at the Author’s discretion.



      Neural Network 101 – Ultimate Guide For Starters

      Date: 03-July-2040

      Mission: Project Aries

      Destination: Mars

      Date of arrival to Mars: 18-Feb-2041

      Landing Location: Jezero Crater, Mars 

“Imagine you are on a space mission to Mars as part of “Project Aries”. You are in a spaceship along with your crew (8 in total) and an ASI (Artificial Super Intelligence); let’s call it “HAL9000″. You are drifting through the vast vacuum of the universe, millions of miles away from Earth. In order to preserve valuable resources like energy, oxygen, and water, you and your crew enter a deep sleep state for 4 months. In the meanwhile, your onboard ASI will be monitoring and controlling all operations of your spacecraft. Now, what if HAL9000 considers you and your crew a threat to its existence and decides to sabotage the mission? Scary, isn’t it? I am sure you have figured out which movie this is referring to. That’s right! I am talking about 2001: A Space Odyssey. For those of you who do not know what HAL9000 is, well, this is HAL9000.” Love that glowing red eye!!

You might ask what this has to do with neural networks. Well, technically HAL9000 is termed an “Artificial Super Intelligence”, but in very simple terms, it’s a neural network, which is the topic of this blog. So let’s dive into the realm of neural networks.

There are several definitions of neural networks. A few of them include the following:

      Neural networks or also known as Artificial Neural Networks (ANN) are networks that utilize complex mathematical models for information processing. They are based on the model of the functioning of neurons and synapses in the brain of human beings. Similar to the human brain, a neural network connects simple nodes, also known as neurons or units. And a collection of such nodes forms a network of nodes, hence the name “neural network.” – hackr.io

      Well that’s a lot of stuff to consume

      For anyone starting with a neural network, let’s create our own simple definition of neural networks. Let’s split these words into two parts.

Network means an interconnection of some sort between things. Between what exactly? We will see that later down the road.

      Neural means neurons. What are neurons? let me explain this shortly.

So a neural network means a network of neurons. That’s it. You might ask, “Why are we discussing biology in neural networks?”. Well, in the data science realm, the neural networks we discuss are basically inspired by the structure of the human brain, hence the name.

      Another important thing to consider is that individual neurons themselves cannot do anything. It is the collection of neurons where the real magic happens.

      Neural Network in Data Science Universe

You might have a question: “Where does the neural network stand in the vast Data Science Universe?” Let’s find this out with the help of a diagram.

      In this diagram, what are you seeing? Under Data Science, we have Artificial Intelligence. Consider this as an umbrella. Under this umbrella, we have machine learning( a sub-field of AI). Under this umbrella, we have another umbrella named “Deep Learning” and this is the place where the neural network exists. (Dream inside of another dream 🙂 classical inception stuff )

Basically, deep learning is the sub-field of machine learning that deals with the study of neural networks. Why is the study of neural networks called “Deep Learning”? Well, read this blog further to know more 🙂

      Structure of Neural Networks

Since we already said that neural networks are inspired by the human brain, let’s first understand the structure of the human brain.

Each neuron is composed of three parts:-

      1.Axon

      2.Dendrites

      3.Body

As I explained earlier, neurons work in association with each other. Each neuron receives signals from other neurons, and this is done by the dendrites. The axon is responsible for transmitting output to other neurons. Dendrites and axons are interconnected through the cell body (in simplified terms). Now let’s relate this to the neural networks used in the data science realm.

      In any neural network, there are 3 layers present:

      1.Input Layer: It functions similarly to that of dendrites. The purpose of this layer is to accept input from another neuron.

      2.Hidden Layer: These are the layers that perform the actual operation

3.Output Layer: It functions similarly to that of axons. The purpose of this layer is to transmit the generated output to other neurons.

      One thing to be noted here is that in the above diagram we have 2 hidden layers. But there is no limit on how many hidden layers should be here. It can be as low as 1 or as high as 100 or maybe even 1000!

Now it’s time to answer our question: “Why is the study of neural networks called Deep Learning?” Well, the answer is right in the figure itself 🙂

      It is because of the presence of multiple hidden layers in the neural network hence the name “Deep”. Also after creating the neural network, we have to train it in order to solve the problem hence the name “Learning”. Together these two constitute “Deep Learning”

      Ingredients of Neural Network

      As Deep Learning is a sub-field of Machine Learning, the core ingredients will be the same. These ingredients include the following:

      1.Data:- Information needed by neural network

      2.Model:- Neural network itself

      3.Objective Function:- Computes how close or far our model’s output from the expected one

      4.Optimisation Algorithm:-Improving performance of the model through a loop of trial and error

      The first two ingredients are quite self-explanatory. Let’s get familiar with objective functions.

      Objective Function

      The purpose of the objective function is to calculate the closeness of the model’s output to the expected output. In short, it computes the accuracy of our neural network. In this regard, there are basically two types of objective functions.

      1. Loss Function:-

      To understand loss function, let me explain this with the help of an example. Imagine you have a Roomba(A rover that cleans your house). For those who do not know what Roomba is, well this is Roomba.

      Let’s call our Roomba “Mr.robot”. Mr. robot’s job is to clean the floor when it senses any dirt. Now since Mr.robot is battery-operated, each time it functions, it consumes its battery power. So in this context what is the ideal condition in which Mr.robot should operate? Well by consuming minimum possible energy but at the same time doing its job efficiently. That is the idea behind loss function.

      The lower the value of the loss function, the better is the accuracy of our neural network.
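A common concrete choice of loss function is the mean squared error (MSE); this small sketch shows how a worse set of predictions produces a larger loss (the numbers are made up for illustration):

```python
def mse(actual, predicted):
    """Mean squared error between targets and model outputs."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual = [3.0, 5.0, 2.0]
good   = [2.9, 5.1, 2.2]   # predictions close to the targets
bad    = [1.0, 8.0, 6.0]   # predictions far from the targets
print(mse(actual, good))   # small loss -> better network
print(mse(actual, bad))    # large loss -> worse network
```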

      2. Reward Function:

      Let me explain this with the help of another example.

Let’s say you are teaching your dog to fetch a stick. Every time your dog fetches the stick, you reward it with, let’s say, a bone. Well, that is the concept behind the reward function.

The higher the value of the reward function, the better the accuracy of our neural network.

      Optimization Algorithm

      Any machine learning algorithm is incomplete without an optimization algorithm. The main goal of an optimization algorithm is to subject our ML model (in this case neural network) to a series of trial and error processes which eventually results in a model having higher accuracy.

      In the context of neural networks, we use a specific optimization algorithm called gradient descent. Let’s understand this with the help of an example.

Let’s imagine that we are climbing down a hill. With each step, we can feel that we are reaching a flat surface. Once we reach the flat surface, we no longer feel that strain on our feet. Well, similar is the concept of gradient descent.

In gradient descent, there are a few terms that we need to understand. In our previous example, when we climb down the hill we reach a flat surface. In gradient descent, we call this the global minimum. Now, what does the global minimum mean? If you use a loss function, it is the point at which you have minimum loss, and it is the preferred one.

      Alternatively, if you are going to use a reward function, then our goal is to reach a point where the reward is maximum ( means reaching a global maximum). In that case, we have to use something called gradient ascent. Think of it as an opposite to gradient descent. Meaning that now we need to climb up the hill in order to reach its peak 🙂
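The hill-climbing picture can be sketched in a few lines: the code below walks down the quadratic “hill” f(w) = (w - 3)**2 + 1 until it reaches the flat surface at w = 3. The starting point and step size are arbitrary illustrative choices; flipping the sign of the update would give gradient ascent instead:

```python
# Climbing down the "hill" f(w) = (w - 3)**2 + 1, whose global minimum is at w = 3.
def grad(w):
    return 2 * (w - 3)     # derivative of f with respect to w

w = 10.0                   # starting point, high up the hill
step = 0.1                 # learning rate
for _ in range(100):
    w -= step * grad(w)    # each update moves downhill
print(round(w, 4))         # 3.0 -- the "flat surface"
```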

      Types of Neural Networks

There are many different types of neural networks. A few of the popular ones include the following.

      Let me give you a single liner about where those neural networks are used

      1.Convolutional Neural Network(CNN): used in image recognition and classification

      2.Artificial Neural Network(ANN): used in image compression

      3.Restricted Boltzmann Machine(RBM): used for a variety of tasks including classification, regression, dimensionality reduction

      4.Generative Adversarial Network(GAN): used for fake news detection, face detection, etc.

      5.Recurrent Neural Network(RNN): used in speech recognition

      6.Self Organizing Maps(SOM): used for topology analysis

      Applications of Neural Network in Real Life

      In this part, let’s get familiar with the application of neural networks

      1.Adaptive Battery in Android OS

      If you happened to have an android phone running android os 9.0 or above, when you go inside the setting menu under the battery section you will see an option for an adaptive battery. What this feature does is pretty remarkable. This feature basically uses Convolutional Neural Networks(CNN) to identify which apps in your phone are consuming more power and based on that, it will restrict those apps.

      2. Live Caption in Android OS

      As a part of Android OS 10.0, Google introduced a feature called Live Caption. When enabled this feature uses a combination of CNN and RNN to recognize the video and generate a caption for the same in real-time

      3. Face Unlock

      Today almost any newly launched android phone is using some sort of face unlock to speed up the unlocking process. Here essentially CNN’s are used to help identify your face. That’s why you can observe that the more you use face unlock, the better it becomes over time.

      4.Google Camera Portrait Mode

Do you have a Google Pixel? Wondering why it takes industry-leading bokeh shots? Well, you can thank the integration of CNNs into the Google Camera for that 🙂

      5.Google Assistant

Wonder how Google Assistant wakes up after you say “Ok Google”? Don’t say this loudly, you might invoke someone’s Google Assistant 🙂. It uses an RNN for this wake-word detection.

      Well, this is it. This is all you need to know about neural networks as a starter.

      I hope you like this article.

      If you like this article please share this with your friends and colleagues.


      A Beginners Guide To Multi

      This article was published as a part of the Data Science Blogathon

      In the era of Big Data, Python has become the most sought-after language. In this article, let us concentrate on one particular aspect of Python that makes it one of the most powerful Programming languages- Multi-Processing.

      Now before we dive into the nitty-gritty of Multi-Processing, I suggest you read my previous article on Threading in Python, since it can provide a better context for the current article.

      Let us say you are an elementary school student who is given the mind-numbing task of multiplying 1200 pairs of numbers as your homework. Let us say you are capable of multiplying a pair of numbers within 3 seconds. Then on a total, it takes 1200*3 = 3600 seconds, which is 1 hour to solve the entire assignment.  But you have to catch up on your favorite TV show in 20 minutes.

What would you do? An intelligent student, though dishonest, will call up three more friends who have similar capability and divide the assignment. So each of you gets 300 multiplications on your plate, which you’ll complete in 300*3 = 900 seconds, that is, 15 minutes. Thus you, along with your 3 friends, will finish the task in 15 minutes, giving you 5 minutes to grab a snack and sit down for your TV show. The task took just 15 minutes when the 4 of you worked together, which otherwise would have taken 1 hour.
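The arithmetic behind the analogy can be checked directly; note that each of the four students handles 1200/4 = 300 pairs:

```python
pairs = 1200
seconds_per_pair = 3
students = 4

serial_time = pairs * seconds_per_pair            # 3600 s = 1 hour alone
per_student = pairs // students                   # 300 pairs each
parallel_time = per_student * seconds_per_pair    # 900 s = 15 minutes together
print(serial_time // 60, parallel_time // 60)     # 60 15
```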

This is the basic ideology of Multi-Processing. If you have an algorithm that can be divided among different workers (processors), then you can speed up the program. Machines nowadays come with 4, 8, and 16 cores, which can then be deployed in parallel.

      Multi-Processing in Data Science-

      Multi-Processing has two crucial applications in Data Science.

      1. Input-Output processes-

      Any data-intensive pipeline has input, output processes where millions of bytes of data flow throughout the system. Generally, the data reading(input) process won’t take much time but the process of writing data to Data Warehouses takes significant time. The writing process can be made in parallel, saving a huge amount of time.
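As an illustrative sketch of parallelizing such writes (not code from this article): since writes are I/O-bound, the standard-library concurrent.futures module can run them concurrently with threads; the chunk data and file names below are hypothetical:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def write_chunk(args):
    """Write one chunk of data to its own file and return the path."""
    directory, index, data = args
    path = os.path.join(directory, f"chunk_{index}.txt")   # hypothetical naming scheme
    with open(path, "w") as f:
        f.write(data)
    return path

# Eight hypothetical chunks of output data to persist.
chunks = [f"row {i}\n" * 1000 for i in range(8)]

with tempfile.TemporaryDirectory() as tmp:
    tasks = [(tmp, i, data) for i, data in enumerate(chunks)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        paths = list(pool.map(write_chunk, tasks))   # writes happen concurrently
    print(len(paths))   # 8 files written
```

Threads suit this case because the workers spend most of their time waiting on the disk; for CPU-bound work, the multiprocessing approaches shown later in this article are the better fit.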

      2. Training models

      Though not all models can be trained in parallel, few models have inherent characteristics that allow them to get trained using parallel processing. For example, the Random Forest algorithm deploys multiple Decision trees to take a cumulative decision. These trees can be constructed in parallel. In fact, the sklearn API comes with a parameter called n_jobs, which provides an option to use multiple workers.

      Multi-Processing in Python using Process class-

      Now let us get our hands on the multiprocessing library in Python.

      Take a look at the following code

Python Code:

import time

def sleepy_man():
    print('Starting to sleep')
    time.sleep(1)
    print('Done sleeping')

tic = time.time()
sleepy_man()
sleepy_man()
toc = time.time()
print('Done in {:.4f} seconds'.format(toc-tic))

      The above code is simple. The function sleepy_man sleeps for a second and we call the function two times. We record the time taken for the two function calls and print the results. The output is as shown below.

Starting to sleep
Done sleeping
Starting to sleep
Done sleeping
Done in 2.0037 seconds

      This is expected as we call the function twice and record the time. The flow is shown in the diagram below.

      Now let us incorporate Multi-Processing into the code.

import multiprocessing
import time

def sleepy_man():
    print('Starting to sleep')
    time.sleep(1)
    print('Done sleeping')

tic = time.time()
p1 = multiprocessing.Process(target=sleepy_man)
p2 = multiprocessing.Process(target=sleepy_man)
p1.start()
p2.start()
toc = time.time()
print('Done in {:.4f} seconds'.format(toc-tic))

Here multiprocessing.Process(target=sleepy_man) defines a multi-process instance. We pass the required function to be executed, sleepy_man, as an argument. We trigger the two instances with p1.start() and p2.start().

      The output is as follows-

Done in 0.0023 seconds
Starting to sleep
Starting to sleep
Done sleeping
Done sleeping

Now notice one thing. The time log print statement got executed first. This is because, along with the multi-process instances triggered for the sleepy_man function, the main code of the program continued executing separately in parallel. The flow diagram given below will make things clear.

In order to execute the rest of the program only after the multi-process functions have completed, we need to call the join() method.

import multiprocessing
import time

def sleepy_man():
    print('Starting to sleep')
    time.sleep(1)
    print('Done sleeping')

tic = time.time()
p1 = multiprocessing.Process(target=sleepy_man)
p2 = multiprocessing.Process(target=sleepy_man)
p1.start()
p2.start()
p1.join()
p2.join()
toc = time.time()
print('Done in {:.4f} seconds'.format(toc-tic))

      Now the rest of the code block will only get executed after the multiprocessing tasks are done. The output is shown below.

Starting to sleep
Starting to sleep
Done sleeping
Done sleeping
Done in 1.0090 seconds

      The flow diagram is shown below.

      Since the two sleep functions are executed in parallel, the function together takes around 1 second.

We can define any number of multi-processing instances. Look at the code below. It defines 10 different multi-processing instances using a for loop.

import multiprocessing
import time

def sleepy_man():
    print('Starting to sleep')
    time.sleep(1)
    print('Done sleeping')

tic = time.time()
process_list = []
for i in range(10):
    p = multiprocessing.Process(target=sleepy_man)
    p.start()
    process_list.append(p)
for process in process_list:
    process.join()
toc = time.time()
print('Done in {:.4f} seconds'.format(toc-tic))

      The output for the above code is as shown below.

Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done in 1.0117 seconds

Here the ten function executions are processed in parallel and thus the entire program takes just one second. Now, my machine doesn’t have 10 processors. When we define more processes than our machine has cores, the multiprocessing library has logic to schedule the jobs, so you don’t have to worry about it.

      We can also pass arguments to the Process function using args.

import multiprocessing
import time

def sleepy_man(sec):
    print('Starting to sleep')
    time.sleep(sec)
    print('Done sleeping')

tic = time.time()
process_list = []
for i in range(10):
    p = multiprocessing.Process(target=sleepy_man, args=[2])
    p.start()
    process_list.append(p)
for process in process_list:
    process.join()
toc = time.time()
print('Done in {:.4f} seconds'.format(toc-tic))

      The output for the above code is as shown below.

Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Starting to sleep
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done sleeping
Done in 2.0161 seconds

      Since we passed an argument, the sleepy_man function slept for 2 seconds instead of 1 second.

      Multi-Processing in Python using Pool class-

In the last code snippet, we executed 10 different processes using a for loop. Instead of that, we can use the Pool class to do the same.

import multiprocessing
import time

def sleepy_man(sec):
    print('Starting to sleep for {} seconds'.format(sec))
    time.sleep(sec)
    print('Done sleeping for {} seconds'.format(sec))

tic = time.time()
pool = multiprocessing.Pool(5)
pool.map(sleepy_man, range(1, 11))
pool.close()
toc = time.time()
print('Done in {:.4f} seconds'.format(toc-tic))

multiprocessing.Pool(5) defines the number of workers. Here we define the number to be 5. pool.map() is the method that triggers the function execution. We call pool.map(sleepy_man, range(1, 11)). Here, sleepy_man is the function that will be called, with the parameter for each execution taken from range(1, 11) (generally a list is passed). The output is as follows-

Starting to sleep for 1 seconds
Starting to sleep for 2 seconds
Starting to sleep for 3 seconds
Starting to sleep for 4 seconds
Starting to sleep for 5 seconds
Done sleeping for 1 seconds
Starting to sleep for 6 seconds
Done sleeping for 2 seconds
Starting to sleep for 7 seconds
Done sleeping for 3 seconds
Starting to sleep for 8 seconds
Done sleeping for 4 seconds
Starting to sleep for 9 seconds
Done sleeping for 5 seconds
Starting to sleep for 10 seconds
Done sleeping for 6 seconds
Done sleeping for 7 seconds
Done sleeping for 8 seconds
Done sleeping for 9 seconds
Done sleeping for 10 seconds
Done in 15.0210 seconds

The Pool class is a better way to deploy Multi-Processing because it distributes the tasks to the available processors using a First In First Out schedule. It is similar to the map-reduce architecture: in essence, it maps the input to the different processors and collects the output from all of them as a list. The processes in execution are stored in memory and the other non-executing processes are stored out of memory.

In the Process class, on the other hand, all the processes are kept in memory and scheduled for execution using a FIFO policy.

      Comparing the time performance for calculating perfect numbers-

       

Using a regular for loop-

import time

def is_perfect(n):
    sum_factors = 0
    for i in range(1, n):
        if n % i == 0:
            sum_factors = sum_factors + i
    if sum_factors == n:
        print('{} is a Perfect number'.format(n))

tic = time.time()
for n in range(1, 100000):
    is_perfect(n)
toc = time.time()
print('Done in {:.4f} seconds'.format(toc-tic))

      The output for the above program is shown below.

6 is a Perfect number
28 is a Perfect number
496 is a Perfect number
8128 is a Perfect number
Done in 258.8744 seconds

Using a Process class-

import time
import multiprocessing

def is_perfect(n):
    sum_factors = 0
    for i in range(1, n):
        if n % i == 0:
            sum_factors = sum_factors + i
    if sum_factors == n:
        print('{} is a Perfect number'.format(n))

tic = time.time()
processes = []
for i in range(1, 100000):
    p = multiprocessing.Process(target=is_perfect, args=(i,))
    processes.append(p)
    p.start()
for process in processes:
    process.join()
toc = time.time()
print('Done in {:.4f} seconds'.format(toc-tic))

      The output for the above program is shown below.

6 is a Perfect number
28 is a Perfect number
496 is a Perfect number
8128 is a Perfect number
Done in 143.5928 seconds

As you can see, we achieved a 44.4% reduction in time when we deployed Multi-Processing using the Process class, instead of a regular for loop.

Using a Pool class-

import time
import multiprocessing

def is_perfect(n):
    sum_factors = 0
    for i in range(1, n):
        if n % i == 0:
            sum_factors = sum_factors + i
    if sum_factors == n:
        print('{} is a Perfect number'.format(n))

tic = time.time()
pool = multiprocessing.Pool()
pool.map(is_perfect, range(1, 100000))
pool.close()
toc = time.time()
print('Done in {:.4f} seconds'.format(toc-tic))

      The output for the above program is shown below.

6 is a Perfect number
28 is a Perfect number
496 is a Perfect number
8128 is a Perfect number
Done in 74.2217 seconds

      As you can see, compared to a regular for loop we achieved a 71.3% reduction in computation time, and compared to the Process class, a 48.4% reduction.

      Thus, it is evident that by deploying a suitable method from the multiprocessing library, we can achieve a significant reduction in computation time.
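One practical detail worth noting when adapting these snippets: on platforms that start workers by spawning a fresh interpreter (Windows, and macOS by default on recent Python versions), the pool must be created inside an `if __name__ == '__main__':` guard. Below is a minimal sketch of the same perfect-number search using `Pool` as a context manager; the smaller range of 10,000 is just an illustrative choice to keep the run quick:

```python
import multiprocessing


def is_perfect(n):
    # Sum the proper divisors of n; n is perfect if they add up to n.
    return sum(i for i in range(1, n) if n % i == 0) == n


if __name__ == "__main__":
    # The __main__ guard is required on platforms that spawn worker
    # processes; without it, each worker would re-run the module's
    # top-level code and the program would crash.
    with multiprocessing.Pool() as pool:
        flags = pool.map(is_perfect, range(1, 10000))
    perfect = [n for n, flag in enumerate(flags, start=1) if flag]
    print(perfect)  # [6, 28, 496, 8128]
```

Returning a boolean from the worker instead of printing inside it also avoids interleaved output from multiple processes.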


      Beginners Guide To Resetting Android Phones – Webnots

      Generally, Android smartphones are reliable, and you can fix most issues by adjusting the settings. However, there are also occasions when resetting the phone is necessary to fix a problem like frequent app crashes. Similarly, a factory reset is a good option before selling the phone, to ensure all your data is erased and cannot be accessed by someone else. Whether you want to fix a problem with your phone or erase all of your data, here is a complete guide to resetting Android phones, including the factory reset.

      Reset Options in Android

      Network – reset network connections like Wi-Fi, mobile, Bluetooth, etc.

      Apps – delete app preferences and set to initial state.

      SIM – delete downloaded SIM data.

      Factory reset – delete everything and set to factory settings like when you purchased it as a new phone.

      We will explain all these options in the following sections, and you can choose one based on your need. As mentioned, do the factory reset before selling your phone. Similarly, resetting the network is an option for fixing unstable Wi-Fi connection problems.

      Reset Options in Android

      Note: The menu path may differ on your phone depending on the manufacturer. If required, refer to your phone’s instruction manual to find where the reset page is located.

      1. Reset Wi-Fi, Mobile & Bluetooth

      Resetting Wi-Fi, mobile & Bluetooth might help fix networking issues that you cannot solve in other ways. Moreover, you can use this option when you permanently relocate your office or house and want to delete all saved Wi-Fi passwords and other network-related keys from your phone. In addition, resetting Wi-Fi, mobile & Bluetooth will delete all paired Bluetooth devices and reset the network settings of apps. However, this will not delete any personal information, and all your media files will remain intact.

      To do the network resetting, press the “Reset Wi-Fi, mobile & Bluetooth” option on the “Reset options” page.

      You can choose to erase downloaded SIMs as well.

      You will also see the networks associated with your SIMs. If required, select the “Erase downloaded SIMs” option to delete and reset the SIM networks.

      Once you are ready, press the “Reset settings,” and the phone will run the network resetting process.  

      Network Reset Options in Android

      Note: After doing network reset, you need to connect to new or old Wi-Fi using the password like first time setup. Therefore, make sure to keep your Wi-Fi password ready before resetting network settings.

      2. Reset App Preferences

      You might have changed a lot of settings while using the phone. For example, you may use the Firefox browser to open webpages instead of the default Google Chrome. Android gathers all your custom settings over time and remembers them. At some point, you may want to reset all app preferences and return to the Android defaults.

      To do that, choose “Reset app preferences” on the “Reset options” page.

      A message box will appear on the screen, prompting that this action will reset preferences for disabled apps, disabled app notifications, default applications, background data restrictions for apps, and other permission restrictions.

      Press the “Reset apps” button to start the process. You will not lose any app or personal data.

      Reset App Preferences in Android

      3. Erase Downloaded SIMs

      This option is handy when you want to erase all data stored on the SIM.

      Select “Erase downloaded SIMs” on the “Reset options” main page.

      A dialog box will inform you that it will delete the SIM data, not the mobile service plans.

      Press the “Erase” button to delete the downloaded SIMs.

      It works on eSIMs as well; an eSIM is a digital SIM that lets you use carrier services without a physical SIM in your phone.

      You can contact your service provider to get a new SIM.

      Erase Downloaded SIMs in Android

      4. Erase All Data (Factory Reset)

      Factory reset will erase all data on your phone’s internal memory, including media files, apps, Google account, system, app settings and data, and any other data. Therefore, make sure to make a full backup of your phone to avoid losing any valuable data before running a factory reset. Learn more on how to take a complete backup in Android phones.

      Tap on the “Erase all data (factory reset)” item, which is the last option on the “Reset options” page.

      You will see a list of currently signed in apps at the bottom of the factory reset page. Check the list and sign out of the apps if required.

      Additionally, you can check the “Erase downloaded SIMs” option to delete all the data on your SIMs.

      Once you are sure about it, touch the “Erase all data” button at the bottom to start the resetting process.

      This process will take time, depending on the data available on your phone and its model.

      Factory Reset Option in Android

      We strongly recommend fully charging your phone before starting the factory reset process. You can also plug it into a power source for peace of mind, because any interruption during the factory resetting process may break the Android installation. If that happens, you will need to reinstall Android on your phone, which complicates everything.

      Note: Factory reset will not erase any data stored on your phone’s external storage, like an SD card. Therefore, make sure to manually erase the data on your external storage.

      Using Shredder Apps

      Factory resetting will permanently delete all the data without a recovery option, thanks to the encryption feature in recent Android versions. However, there are some recovery apps that can get data back even after deletion. This is especially a concern for SD cards or if you are using an older version of Android. When you decide to delete the data completely, consider using a file shredder app as an extra security layer. These file shredder apps run a special algorithm to shred all the deleted data on your phone’s storage, making it irrecoverable. Below are some of the popular file shredder apps you can try for this purpose.

      Secure Erase iShredder

      Shreddit – Data Eraser

      Data Eraser cb

      Andro Shredder


      Final Words

      Resetting your Android phone is the last option you should consider while troubleshooting an issue. Choose the narrowest reset option you need, such as a network reset, instead of doing a factory reset. However, a factory reset is the solution before selling the phone or for deleting all of its data.

      Top 10 Deep Neural Network Companies To Lookout For In 2023

      A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers.

      A deep neural network is a type of machine learning system that uses many layers of nodes to derive high-level functions from input information, transforming the data into progressively more abstract representations. A DNN does not merely follow a fixed algorithm; it can predict a solution for a task and draw conclusions from its previous experience, processing data in complex ways through sophisticated mathematical modelling. The evolution of DNNs followed from machine learning: a machine learning model is a single model that makes predictions with some accuracy, and the learning portion of building such models spawned the development of artificial neural networks. DNNs build on the ANN concept, allowing a model’s accuracy to increase, and very deep networks came later. This article features the top 10 deep neural network companies to look out for in 2023.
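To make the idea of stacked layers concrete, here is a minimal sketch of a forward pass through a toy deep network in pure Python. The layer sizes, tanh activation, and random weights are illustrative assumptions, not any company's actual model:

```python
import math
import random

random.seed(0)


def dense(inputs, weights, biases):
    # One fully connected layer: weighted sum of inputs plus bias,
    # passed through a tanh non-linearity.
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]


# A toy 3-4-2 network: each hidden layer re-represents its input at a
# higher level of abstraction, which is the core idea of a DNN.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b2 = [0.0] * 2

hidden = dense([0.5, -0.2, 0.1], w1, b1)
output = dense(hidden, w2, b2)
print(len(hidden), len(output))  # 4 2
```

Real DNNs differ mainly in scale (many more layers and units) and in how the weights are learned via backpropagation rather than set at random.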

      Google

      If we are talking about the top 10 deep neural network companies to look out for in 2023, we cannot move forward without mentioning Google. The Mountain View company has acquired twelve artificial intelligence firms in the last four years. The most significant transaction was a US$400 million deal for DeepMind, the company behind the Go-playing champion AlphaGo. There is also the ongoing Tensor AI chip project for on-device machine learning, as well as Google’s machine learning framework TensorFlow, which is now free for everyone.

      IBM

      IBM is best known for producing and selling computer hardware and software, as well as cloud computing and data analytics. The company has also served as a major research and development corporation over the years, with significant inventions like the floppy disk, the hard disk drive, and the UPC barcode. IBM has developed Watson, a machine learning platform that can be used to integrate AI into business processes such as creating a customer care chatbot.

      Intel

      Intel has also been on a buying spree with artificial intelligence startups, acquiring Nervana and Movidius, as well as a number of smaller AI start-ups. Intel® oneAPI Deep Neural Network Library (oneDNN) is an open-source performance library for deep learning applications. The library includes basic building blocks for neural networks optimized for Intel Architecture Processors and Intel Processor Graphics.

      Microsoft

      Microsoft is interested in both consumer and corporate Artificial Intelligence. Cortana, Microsoft’s artificial intelligence digital assistant, competes directly with Alexa, Siri, and Google Assistant. The company’s Azure Cloud service, which delivers chatbots and machine learning capabilities to some of the largest brands in the business, includes a lot of AI features. Microsoft is one of the best deep neural network companies to look out for in 2023.

      Qualcomm

      Qualcomm is a chipmaker dedicated to artificial intelligence. The Snapdragon 855 smartphone platform relies heavily on artificial intelligence. For AI voice, audio, and picture tasks, the chip uses a signal processor. Qualcomm Snapdragon processors are found in some of the most popular smartphones.

      Neurala

      Neurala, founded in 2006, is a visual AI software pioneer. Neurala helps industrial companies improve their quality inspection process with technology that dramatically reduces the time, cost, and skills required to build and maintain production-quality custom vision AI solutions, as part of its mission to make AI more applicable and useful in real-world applications. It is one of the best deep neural network companies to look out for in 2023.

      OpenAI

      Although it has obtained enormous amounts of money, some through investments and some through acquisitions, the non-profit research group focuses on creating AI for the benefit of all humanity and has managed to keep its open-source mentality. OpenAI employs some of the most well-known individuals in AI, including deep learning expert Ilya Sutskever. Microsoft, Amazon, and Elon Musk are among its sponsors.

      Starmind

      Starmind gives consumers instant access to undocumented human knowledge, allowing them to tap into networks’ collective human intelligence in real-time and with the necessary expertise. The collective powers of mankind, according to Starmind, are higher than any computational power accessible. Starmind enables connections to people, skills, knowledge, and solutions inside organisations, communities, and the globe at large using self-learning algorithms that mimic the brain. It is one of the best deep neural network companies to look out for in 2023.

      Clarifai

      For modelling unstructured image, video, text, and audio data, Clarifai delivers a leading computer vision, natural language processing, and deep learning AI lifecycle platform. It uses object classification, detection, tracking, geolocation, facial recognition, visual search, and natural language processing to help both public sector and enterprise customers address complicated use cases. On-premise, cloud and bare-metal installations are all available from Clarifai.

      NeuralWare

      Since 1987, NeuralWare has provided time-tested and field-proven technology platforms for creating and deploying neural network-based empirical modelling solutions. NeuralWare can assist in any type of problem, including prediction, classification, and pattern recognition. NeuralWare provides end-to-end solutions that may be used in embedded systems, desktop apps, enterprise frameworks, and cloud-based servers. It is one of the best deep neural network companies to look out for in 2023.

      More Trending Stories 

      How To Edit Photos In Lightroom – The Complete Guide For Beginners

      How To Edit Photos In Lightroom – The Complete Guide

      Learning how to edit photos in Lightroom is a big part of many photographers’ journeys. With all the things we see online, it doesn’t seem all that hard either! Just a few buttons here, a couple of sliders there, and your photo looks more professional than ever.

      However, after you’ve tried your hand at editing photos for yourself, you’ve likely come to one conclusion. Editing photos in Lightroom is a lot harder than you anticipated. Not to worry though, we’ve all started in the same position you are. All it takes to learn how to edit photos in Lightroom is an understanding of the tools available to you!

      Photo editing shouldn’t feel like a chore. This guide will teach you the ins and outs of how to edit photos in Lightroom with an actionable, easy-to-follow method. Here you’ll discover the most important tools and the overall layout of the program. After reading this, you’ll be well on your way to editing photos more professionally in no time.

      Why Should You Edit Photos In Lightroom

      Lightroom is one of the most straightforward and accessible photo editing programs out there. Whether you’ve never edited a photo in your life or you have a general idea of what’s going on, Lightroom is intuitive. There aren’t any fancy shortcuts or hidden tools that are difficult to remember. For the most part, what you see is what you get, and that’s a very admirable aspect of Adobe Lightroom.

      A lot of the tools found in Lightroom can also be found in other programs. This makes editing photos in Lightroom an excellent starting ground for any beginner. As you begin to build your knowledge and confidence with Lightroom, other photo editing programs will feel more natural to you.

      Learning how to edit your photos is one of the biggest factors of improving your photography. Taking the picture is only half the battle. If you really want to have your image stand out from the crowd, photo editing is a must. Lightroom is an extremely powerful tool to help you do just that. It gives you the ability to improve your photography without the challenges that come with learning other editing programs. Once you learn how to edit photos in Lightroom, you’ll quickly notice an improvement in your images!

      The Importance Of Creating A Workflow When Editing

      Although learning the different tools of Lightroom is a huge part, creating a workflow is crucial to speed up the editing process. A workflow is a series of steps that you follow to get from point A to B in your image. This doesn’t mean you always make the same adjustments, but it means you use the same type of adjustments for each step. For example, adjusting your exposure, color, and then creating spot adjustments would be considered a basic workflow.

      Creating a workflow will help you to streamline the editing process and reduce the amount of time you stare blankly at the screen. Rather than hmming over what you should do next, you know exactly what series of steps to follow.

      The way I outline this article is written with the intention of helping you build a workflow. Although I will be sharing info in individual tools in Lightroom, each section will offer the order of steps to follow in your edit. By putting to use the workflow outline shared here, you’ll have a clear path to success when photo editing in Lightroom.

      How To Edit Photos In Lightroom (Like A Pro!)

      To help you learn how to edit your photos in Lightroom, I’ll be breaking down all the essential tools. Each section will provide a new step in your Lightroom workflow, where I’ll break down the best tools and the purpose they serve. There is a lot to talk about here, so let’s dive in!

      Step 1: Import Your Images

      How To Import Photos Into Lightroom

      A new window will open, asking you to locate your files. Choose the folder where your images are saved and select it.

      All the images that are checked off will be imported into Lightroom. If there are certain images you don’t want to import, you can uncheck them now.

      When you import your photos into Lightroom, you have a few options of how your files are managed. You can choose between Copy as DNG, Copy, Move or Add. If you’re unsure, Add is typically the best (and easiest) option.

      Here’s what each of these options entails:

      Copy As DNG: Copies files and moves them to a different folder. Converts all files into DNG files and imports to Lightroom. DNG files can be useful if your camera’s RAW file is not recognized or supported on a certain platform.

      Copy: Copies all files from their original location and adds them to a new folder before importing to Lightroom. This is useful if you want to create a backup of your photos without doing it manually.

      Move: Moves all files from their original location to a new location before importing to Lightroom. You could use this to import your files from your memory card and move them into a folder on your computer. A more streamlined way to add your photos!

      Add: Add your photos from their existing location and import them into Lightroom. Make sure you have saved your files in a designated folder beforehand. You don’t want to add your files directly from your memory card. Ensure it’s from a specified folder on your computer.

      Once you’re happy with the import method and your files are selected, press import to add your files into Lightroom. The import button is located on the bottom right of your Lightroom window.

      Where To Access Your Files After Importing To Lightroom

      After import, you will find your files in the specified folder within your Lightroom Catalog. As long as you use the same catalog, all of your imported files will appear within this window. That’s why it’s useful to label your folders with either dates or names to help you remember what’s in each of them!

      After you help Lightroom relink your files, you’ll be able to access them once again without issue!

      Ways To View Your Images In The Lightroom Library

      With all of your images imported into Lightroom, you have two different view options in your Library. You can view all your pictures in Grid View or Loupe View.

      Grid View: Shows all of your images in a tiled pattern. This is easy to see all your photos at once and quickly cycle through them. If you’re looking to find a certain section of images, this is the best view to do it.

      Loupe View: Shows a single image in your Lightroom window. This is best for selecting images since you can see the photo up close. In Loupe View, you aren’t editing your image, but just getting a closer look at it in your library.

      You can toggle between these two views in your Lightroom toolbar located beneath your photo. If you don’t have your toolbar visible, press T to bring it into view. The toolbar is handy to switch views and help you to select the best images from the bunch.

      What Is A Catalog In Lightroom?

      A Lightroom Catalog is a collection of images within Lightroom. Imagine it like a virtual photo book. The more pictures you add to your Lightroom Catalog, the bigger your photo book becomes. You can add all your images into one catalog or create multiple catalogs for different purposes. For example, you could create a catalog for all your vacation photos, a catalog for a certain client, and another catalog for your personal portfolio images. Lightroom Catalogs make it easier to separate and sort all of your photos into related groups.

      How To Organize and Cull Photos In Lightroom

      Chances are, you aren’t going to edit every photo you import. Some images will be throwaways, and others will be your next greatest photo. To make the editing process more streamlined, it’s crucial to sift through your photos and select the ones you want to edit. This organization process is also known as culling pictures and is an essential step in learning how to edit photos in Lightroom.

      There are two primary methods to organize your photos. The first method is flagging, and the second method is starring. Either method works well for organizing your images, so it’s a truly personal choice.

      Regardless of which method you use, the way you sift through the photos remains the same. Go into Loupe View and use your left and right arrow keys to move between images in your filmstrip. As you go between photos, select the ones you want to edit to boost your efficiency when you start editing.

      You can see which photo is being looked at based on the highlight in your filmstrip. The Lightroom filmstrip shows a lineup of all of your images within the selected folder.

      – The Flagging Method

      – The Star Method

      The second method to cull your photos is using a star rating. By pressing the numbers 1 – 5 on your keyboard, you will add a star rating to the image. This can be useful to segment your selects into different rating values.

      For example, I use the 1 star as a basic select, 2 stars if the photo needs retouching, 3 stars as a finished edit, 4 stars to mark revisions, and 5 stars to mark completed revisions.

      As you learn how to edit photos in Lightroom, you’ll create your own culling method that best suits your needs. The star method is excellent to add different levels to the selections made in Lightroom.

      Viewing Culled & Picked Images In Lightroom

      After you’ve gone through all the images and have made your selects, it’s easiest to filter your filmstrip to selects only. This can be easily done by adjusting the filter option on your filmstrip.

      If you used the flagging method, select flagged.

      If you used the star method, select rated.

      Now your images will be filtered to only show you the pictures you want to edit. This significantly helps to streamline your workflow by only showing you the photos you care about.

      As tedious and boring as organizing can be, it’s a crucial part of learning how to edit photos in Lightroom. Spend the time to organize your photos now so you can significantly speed up your workflow later. Your future self will thank you after you’ve neatly organized all your images!

      Step 2: Primary Image Adjustments

      After you have imported and culled through the photos you want to edit, it’s time to make some basic adjustments. These primary adjustments are the first steps you should take in editing your pictures. Although each of these steps is not always necessary, they’re worth following when learning how to edit photos in Lightroom.

      The first steps you should take when photo editing should be:

      Cropping or Straightening 

      Lens Corrections

      Removing Chromatic Aberration 

      Let’s break down each of these steps and how to use them in Lightroom. To access all of these adjustments, be sure to switch to your Develop Tab.

      How To Crop Photos In Lightroom

      Cropping is something that not every photo will need, so weigh whether or not this step is necessary. It can be a useful step to fix the composition of your image or get rid of distractions. Keep in mind that you want to avoid cropping your photo too much; otherwise, you’ll end up having a lower resolution.

      To crop a photo, simply select the crop tool at the top of your settings window.

      How To Straighten Photos In Lightroom

      If your camera wasn’t level, horizon lines can look skewed and make your image look less professional. To straighten out your photo or fix slanted horizon lines, you can use the Straightening Tool in Lightroom.

      In some cases, you don’t have a clear horizon to align to, so you’ll have to use your best judgment. In these situations, you can use the angle slider to straighten out your image. Simply move the slider left or right to adjust the orientation of your frame.

      How To Add Lens Corrections And Remove Chromatic Aberration In Lightroom

      – Lens Corrections

      Lens corrections, also known as profile corrections, help to counter any distortion caused by your lens. On wide-angle lenses, you may have a bubbled look near the edges of your frame. By using lens corrections, you can help to mitigate the effects of this to create a more ‘true to your eye’ image.

      Within your adjustments bar, scroll down until you see a tab reading ‘Lens Corrections’. From here simply check the box reading ‘Enable Profile Corrections’ to fix any distortion and vignetting.

      More often than not, Lightroom will automatically select the required camera info based on the image metadata. If this doesn’t happen, you’ll just need to manually choose your camera and lens within the profile tabs.

      – Removing Chromatic Aberration

      Chromatic aberration is something that can occur around the edges of things in your image. It appears as a colored fringe caused when a certain color wave isn’t focused correctly to the same focal plane. This can happen with any camera so it’s nothing to worry about, but you will want to get rid of it!

      To remove chromatic aberration in Lightroom, all you need to do is check the ‘Remove Chromatic Aberration’ box.

      This often will get the job done, but you can make manual adjustments if needed. If you find there is still significant color fringing in your photo, adjust the sliders under the manual tab to better target a specific color range of chromatic aberration.

      Step 3: Editing Your Photos In Lightroom

      Now for the exciting stuff! The hardest part of learning how to edit photos in Lightroom is to know which tools to use first. To help you build a workflow, here are the series of steps you should follow when editing an image:

      Exposure Adjustments

      Color Adjustments

      Spot Adjustments

      There are a ton of different tools that you can use in each phase of your edit. It can be hard to choose which tool is best for different situations, so it’s essential to learn how they all work.

      In this section, I’ll share the uses of each tool and how they affect your photo editing in Lightroom. This way, you’ll have the know-how to find the right tool for any adjustment!

      – The Basics Panel

      The Basics Panel is home to all your base exposure and color temperature adjustments. These sliders are simple to use and will make the biggest impact on the exposure of your image. Here’s a list of each tool in this panel and their uses when editing a photo in Lightroom.

      1. White Balance And Tint: White balance and tint are the single best way to correct any colors in your images. You may notice your unedited photo appears too purple or blue, creating an unflattering look to the picture. These sliders will correct any imbalance of color and help make your image look more true to real life.

      Learn more about the importance of white balance in photography.

      2. Exposure Slider: The exposure controls the overall brightness or darkness of a photo. Use this slider to adjust your image brightness globally.

      3. Contrast Slider: Contrast affects the levels of your whites and blacks. As you increase your contrast, you’ll make your whites more white and your blacks more black. This contrast slider adds a global contrast boost to your image.

      4. Highlights & Shadows Sliders: These two sliders will affect the brightness of your highlights or shadows. The highlights are the brightest parts of your photo, while the shadows are the darkest. Adjusting these sliders is an easy way to level out your exposure and bring back details in your image.

      5. Whites & Blacks: Similar to the highlight and shadow sliders, these two sliders will affect the light and dark areas of your photo. However, rather than affecting the brightest and darkest areas, these sliders affect the exposures closer to middle grey. In short, the whites and blacks sliders help to adjust the middle exposure values of your photo. The places that aren’t crazy dark or crazy bright, but somewhere in between.

      6. Texture Slider: This slider adds more contrast to any edges in your photo. This is a great tool to make things pop and look a bit sharper. However, be careful not to overdo it, or you could risk creating an unrealistic looking photo!

      7. Clarity Slider: This slider will add more luminance to certain colors and give an enhanced contrast feel. This can add more drama to your photos and is a great adjustment to use on clouds! Just like the texture slider, be conservative with this tool.

      8. Vibrance: The vibrance slider will adjust the strength of your colors. If you want the colors in your photo to look deeper and richer, the vibrance tool will make it happen.

      9. Saturation: Similar to the vibrance slider, the saturation slider also boosts the intensity of your colors. Rather than boosting all the colors in your photo evenly, saturation tends to enhance the richness of the brightest colors.
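Under the hood, sliders like these boil down to simple per-pixel arithmetic. Lightroom's exact formulas are proprietary, so this is only a hedged sketch of common approximations: exposure multiplies brightness by a power of two per stop, and contrast scales values away from middle grey:

```python
def apply_exposure(value, stops):
    # Exposure in stops multiplies linear brightness by 2**stops.
    # value is a pixel brightness normalised to the 0..1 range.
    return min(1.0, value * (2 ** stops))


def apply_contrast(value, amount):
    # Contrast scales values away from middle grey (0.5), so whites
    # get whiter and blacks get blacker as amount increases.
    return min(1.0, max(0.0, 0.5 + (value - 0.5) * (1 + amount)))


pixel = 0.3                                  # a dark pixel
brighter = apply_exposure(pixel, 1.0)        # +1 stop
punchier = apply_contrast(brighter, 0.5)     # +50% contrast
print(round(brighter, 2), round(punchier, 2))  # 0.6 0.65
```

Note how contrast leaves middle grey untouched while pushing everything else outward, which matches the slider's visual effect.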

      – The Tone Curve

      The Tone Curve adjustment is one of the most versatile tools in Lightroom. It can add both exposure and color adjustments making it useful in a variety of situations.

      The Tone Curve is broken up into four main columns. Going from left to right they represent the shadows, darks, whites, and highlights of your photo. Depending on which section of the curve you adjust, you’ll target different exposure values in the image.

      The Region Curve breaks down each quadrant of the tone curve into four sliders. This makes it easy to adjust your curve without any anchor points. This is more streamlined and much more beginner-friendly. This curve is excellent if you only want to make exposure adjustments. It’s not possible to edit color with this version of the Tone Curve. The Region Curve also limits the amount you can adjust the exposure values to prevent clipping.
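The quadrant behaviour of the Region Curve can be pictured as a piecewise mapping. This is a simplified sketch for illustration only: the 0..1 normalisation and slider scaling are assumptions, and a real tone curve blends smoothly between regions rather than switching abruptly:

```python
def region_curve(value, shadows=0.0, darks=0.0, lights=0.0, highlights=0.0):
    # Each slider lifts or lowers one quadrant of the tonal range.
    # value is 0..1; slider amounts run from -1 (lower) to 1 (lift).
    if value < 0.25:
        adjust = shadows
    elif value < 0.5:
        adjust = darks
    elif value < 0.75:
        adjust = lights
    else:
        adjust = highlights
    return min(1.0, max(0.0, value + 0.25 * adjust))


# Lift the shadows and pull the highlights down slightly.
print(round(region_curve(0.1, shadows=0.4), 2))       # 0.2
print(round(region_curve(0.9, highlights=-0.4), 2))   # 0.8
```

Clamping the result to 0..1 mirrors how the Region Curve limits its adjustments to prevent clipping.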

      Editing Colour With The Tone Curve

      Learn More About How To Master The Tone Curve In Lightroom.

      – HSL Adjustments

      The HSL Adjustment in Lightroom is all about changing the look of your colors. HSL stands for Hue, Saturation, and Luminance. Together, these three options can completely transform the look of your photos.

      The HSL Adjustment splits up all the color values in your image into 8 different channels. Depending on your image, certain channels or adjustment tabs will have more effectiveness in your photo than others. It’s useful to go through each tab individually to see the effects they offer!

      Hue: Changing the color hue will change how a particular range of colors appear. For example, you could alter the hue of yellow to make it appear more orange. You could change the hue of blue to make it more purple. The hue sliders are a super useful tool to apply creative looks to your photo!

      Saturation: Similar to the saturation slider found within the Basics Panel, the saturation sliders of HSL alter the richness of colors. Since each color is broken up into a certain range, you can easily target different values to saturate or desaturate color. Another powerful and simple way to enhance your photo in Lightroom.

      Luminance: Luminance is the lightness of a color. As you increase the luminance, you will essentially alter the exposure of a color. The Luminance Sliders are an easy way to lighten or darken sections of your photo based on color!
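As a rough illustration of what a saturation slider is doing under the hood, the sketch below uses Python’s standard `colorsys` module to scale the saturation of a single color while leaving its hue and lightness alone. This is a simplified stand-in for the idea only; Lightroom’s own color engine is far more sophisticated.

```python
# Conceptual sketch (illustration only, not Lightroom's color engine):
# scaling saturation in HLS space changes richness without shifting
# the hue or the lightness of the color.
import colorsys

def boost_saturation(rgb, amount):
    """Scale the saturation of an RGB color (components in 0-1)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    s = min(1.0, s * amount)
    return colorsys.hls_to_rgb(h, l, s)

muted_blue = (0.4, 0.5, 0.6)
richer_blue = boost_saturation(muted_blue, 2.0)
print(richer_blue)  # ~ (0.3, 0.5, 0.7): same hue and lightness, richer color
```

Note how the boosted color spreads further from neutral grey while the overall brightness stays put; that separation of hue, saturation, and luminance is exactly why the HSL panel is so controllable.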

      – Split Toning

      As you learn how to edit photos in Lightroom, Split Toning is excellent for easily adding stylized color effects. Rather than adjusting the colors that are already in your photo, you can add color to your image. This adjustment lets you add a specific color to both your highlights and shadows for total color control!

      Split Toning in Lightroom is broken down into highlights and shadows. Each exposure value has its own sliders to adjust the hue or saturation.

      The Hue Slider will pick the color tone to be applied to your photo.

      The Saturation Slider will dictate how visible that color tone is in the photo.

      In between the highlight and shadow adjustments is the Balance Slider. This slider lets you choose how much one color hue dominates the other. For example, you could set the balance to favor the shadows so that hue becomes more dominant. It can take a bit of playing around, but the Balance Slider can really refine the split toning adjustment!
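The idea behind split toning can be sketched in a few lines: weight the tint for each pixel by how bright it is, so dark pixels lean toward the shadow color and bright pixels toward the highlight color. The sketch below is a hypothetical illustration of the concept, not Lightroom’s actual algorithm; the Rec. 709 luminance weights are a standard way to measure pixel brightness.

```python
# Conceptual sketch of split toning (illustration only): tint shadows
# toward one color and highlights toward another, weighted by each
# pixel's luminance.

def split_tone(pixel, shadow_tint, highlight_tint, strength=0.2):
    """pixel and tints are (r, g, b) tuples with components in 0-1."""
    r, g, b = pixel
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights
    # Dark pixels lean toward the shadow tint, bright ones toward the highlights.
    tint = tuple(s * (1 - luminance) + h * luminance
                 for s, h in zip(shadow_tint, highlight_tint))
    return tuple(min(1.0, c + t * strength) for c, t in zip(pixel, tint))

teal = (0.0, 0.5, 0.5)
orange = (1.0, 0.5, 0.0)
print(split_tone((0.1, 0.1, 0.1), teal, orange))  # dark pixel: teal-shifted
print(split_tone((0.9, 0.9, 0.9), teal, orange))  # bright pixel: orange-shifted
```

The `strength` parameter here plays the role of the Saturation Slider, and shifting the luminance weighting toward one end would mimic the Balance Slider.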

      – The Detail Panel

      The Detail Panel in Lightroom is home to all your sharpening and noise reduction needs. It might look a bit overwhelming at first, but there are only two main sliders you really should worry about.

      The two most important sliders in the Detail Panel are the Sharpening and Luminance sliders.

      Sharpening Amount Slider: This slider will set how much sharpness is applied to your photo. Sharpness is an essential part of your photo edit in Lightroom since it makes your image look more clear. It boosts the amount of contrast and grain around edges in your frame to create more perceived clearness. Be sparing with this adjustment so you don’t add too much grain!

      Luminance Noise Reduction: This slider will smooth out any noise present in your photo. It works by leveling out any areas of distortion in your image. This can work great to get rid of noise, but it also can make your image look fake and plastic. I tend to never go higher than 10 on this slider.
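To see why heavy noise reduction makes images look plastic, consider this toy Python sketch. It smooths each luminance value toward the average of its neighbors, blended by a 0–100 slider amount. This is a deliberately simple stand-in for Lightroom’s far more advanced filtering, purely to illustrate the trade-off.

```python
# Conceptual sketch of luminance noise reduction (illustration only):
# smooth each value toward its neighbors' average, blended by the
# slider amount (0-100). Real noise reduction is edge-aware; this is not.

def reduce_noise(row, amount):
    """Smooth a 1-D row of luminance values; amount is 0-100."""
    blend = amount / 100.0
    out = []
    for i, v in enumerate(row):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, len(row) - 1)]
        smoothed = (left + v + right) / 3.0
        out.append(v * (1 - blend) + smoothed * blend)
    return out

noisy = [100, 140, 100, 140, 100]
print(reduce_noise(noisy, 10))   # gentle smoothing keeps most of the detail
print(reduce_noise(noisy, 100))  # full smoothing flattens the variation
```

At 100, the 40-level swings collapse to almost nothing, and real texture would be flattened right along with the noise; that is why a low value like 10 is usually enough.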

      – Calibration

      The Calibration Tab is a lesser-known and slightly more mysterious tool to many. When people are learning how to edit photos in Lightroom, they tend to skip over this tool altogether. However, I think there is a ton of value the calibration adjustments can add to the colors of your photos.

      Unlike the other color adjustments we have talked about so far, the Calibration Tool adjusts the hue and saturation of your actual color channels. To put things simply, your photo is broken down into three main colors: Red, Green, and Blue, otherwise known as RGB.

      Together, these colors make up all other colors in your image. So when you adjust the entire channel, you end up with some really neat effects. With very subtle adjustments to the Calibration Sliders, you can totally transform the mood of your pictures.
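A tiny sketch makes the ripple effect clear: because every color in the photo is a mix of the three primaries, adjusting one whole channel shifts every color that contains it. This is an illustration of the principle only, not how Lightroom’s calibration math actually works.

```python
# Conceptual sketch (illustration only): calibration-style edits act on a
# whole primary channel, so one tweak ripples through every mixed color.

def scale_channel(pixel, channel, factor):
    """Scale one RGB channel (0=R, 1=G, 2=B) of a 0-255 pixel."""
    out = list(pixel)
    out[channel] = min(255, round(out[channel] * factor))
    return tuple(out)

# Boosting the blue primary cools both a neutral grey and a warm orange.
print(scale_channel((128, 128, 128), 2, 1.2))  # (128, 128, 154)
print(scale_channel((220, 120, 40), 2, 1.2))   # (220, 120, 48)
```

Because every pixel is touched at once, even small calibration moves shift the overall mood of the image, which is exactly why subtle adjustments go such a long way.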

      These sliders are a really great tool to experiment with after you’ve made the bulk of your adjustments. It can really act as the cherry on top of your edit in Lightroom!

      Step 4: Spot Adjustments

      All the editing tools we have talked about so far adjust your image globally. This is great for an overall look, but what about when you want to target a specific area? Luckily, there are a few easy-to-use spot adjustments in Lightroom that will do just the trick!

      All of these adjustment tools can be accessed at the top of your settings bar. Once you select an adjustment, a new Basic Panel will appear. The adjustments you make in this panel will only affect the areas you have selected.

      – Adjustment Brush

      When you select an area with the Adjustment Brush, it will be shown as a little circle over your image. When you hover over this circle, a red highlight will appear to show where your adjustments are affecting.

      The Adjustment Brush in Lightroom is automatically set to add to your selection. But what if you want to get rid of part of your adjustment area?

      To erase part of your selection, hold the ALT or OPTION key and paint over your image. This will refine where your adjustments are targeting.

      Inside the brush you’ll find two circles. The gap between the outer and inner circle represents the brush feather. In basic terms, the feather will soften the edge of your brush. A larger feather will give a more natural fade around the edge of your selected area.

      You can locate the brush panel beneath all your settings. Here you can change the size, feather, flow, and density of your adjustment brush in Lightroom.

      – Gradient Filter

      The Gradient Filter creates localized adjustments that transition from 100% to 0% visibility. This is great to make subtle adjustments to the sky or edges of your photo.

      You can alter the feather of the gradient by moving the two lines further apart. The further away the lines of your gradient filter are, the softer and less noticeable the transition becomes.

      This tool is extremely straightforward and doesn’t have any hidden tricks or secrets to it. What you see is what you get when learning how to use the Gradient Tool in Lightroom.
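The fall-off the Gradient Filter creates can be sketched as a simple mask function: adjustment strength stays at 100% up to the first line, fades linearly across the feather zone, and hits 0% at the second line. This is a conceptual illustration only, not Lightroom’s internal code.

```python
# Conceptual sketch of a gradient filter mask (illustration only):
# adjustment strength fades linearly from 100% to 0% across the feather.

def gradient_mask(y, start, end):
    """Mask value (1.0 to 0.0) at row y for a top-down gradient.

    start: row where the adjustment is fully applied (mask = 1.0)
    end:   row where the adjustment has fully faded out (mask = 0.0)
    Dragging start and end further apart widens the feather, just like
    pulling the two gradient lines apart in Lightroom.
    """
    if y <= start:
        return 1.0
    if y >= end:
        return 0.0
    return 1.0 - (y - start) / (end - start)

print(gradient_mask(0, 100, 300))    # 1.0  (fully affected, e.g. the sky)
print(gradient_mask(200, 100, 300))  # 0.5  (halfway through the feather)
print(gradient_mask(400, 100, 300))  # 0.0  (unaffected)
```

Multiplying any adjustment by this mask is what makes the transition invisible to the eye when the feather is wide enough.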

      – Radial Filter

      The Radial Filter is similar to the Gradient Filter, except it creates a circular gradient. This is great to make adjustments in a specific spot in your photo. For example, you could put a radial gradient around your subject to add localized brightness. You could also use this filter to darken the edges of your frame for a more moody look.

      You can make adjustments both inside and outside the radial filter. Toggle between the two by checking the Invert box at the bottom of your settings panel. This switches whether your adjustments are applied to the inside or outside of your radial gradient.

      To customize the Radial Filter even further, you can adjust the feather. The feather slider decides how soft the edges of your gradient will be. You can experiment with this, but I find 50 to work well in most scenarios!

      – The Spot Removal Brush

      There will often be things in your images that you’ll need to remove. Whether that’s a sensor spot, a piece of garbage on the sidewalk, or something on your subject’s clothes. Whenever you’re editing a photo in Lightroom and need to get rid of something, the Spot Removal Brush will be your hero!

      The Spot Removal Brush works a lot like the Adjustment Brush in Lightroom. The difference being, the Spot Removal Brush removes anything you paint over.

      When you paint over something with this tool, it tries to find another similar area of your photo to replace it with. It’s not actually getting rid of something, but just hiding it behind another sampled part of your image.

      There are 3 main adjustments you have access to with the Spot Removal Brush:

      Size

      Feather

      Opacity

      The size will alter the size of the brush you use to paint over something in your photo.

      The feather will dictate how soft the edges of your replaced area look.

      The opacity chooses how visible the sampled area appears over the spot you want to remove.
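The size, feather, and opacity settings all feed into one simple idea: blending sampled pixels over flawed ones. Here is a hypothetical sketch of just the opacity step, purely to illustrate the blend; it is not Lightroom’s actual healing algorithm, which also matches texture and lighting.

```python
# Conceptual sketch (illustration only): the spot removal opacity blends
# sampled pixels over the flawed pixels at a chosen strength.

def heal(spot, sample, opacity=100):
    """Blend sample pixel values over spot pixel values; opacity is 0-100."""
    o = opacity / 100.0
    return [s * (1 - o) + p * o for s, p in zip(spot, sample)]

dust = [30, 30, 30]     # dark sensor spot (RGB, 0-255)
sky = [200, 205, 210]   # clean sampled patch of sky
print(heal(dust, sky))       # full opacity: the sample replaces the spot
print(heal(dust, sky, 50))   # half opacity: a visible blend of the two
```

At 100% opacity the spot is fully hidden behind the sample; lower values can help a repair sit more naturally in textured areas.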

      When you paint over something with the Spot Removal Brush in Lightroom you’ll see a white line. This white line represents the sampled area that will be removed. From here, Lightroom automatically selects a similar part of your image to replace it with.

      Sometimes the area it automatically chooses to sample from doesn’t quite work as well as you’d hope. In that case, you can manually move around the sampling area to dictate the final result.

      If you’re removing something against a simple background like the sky, Lightroom will have an easy time finding a new area to sample from. However, it will struggle when you try to get rid of something against a complicated background or specific lighting.

      So yes, the Spot Removal Brush will remove things from your photo, but it’s not always perfect. Regardless, it’s still an incredibly useful tool to use as you learn how to edit photos in Lightroom.

      Step 5: Exporting Photos From Lightroom

      After you’ve finished editing your photo in Lightroom, it’s time to share it with the world! That means it’s time to export.

      Luckily learning how to export images from Lightroom is a breeze. There are just a few crucial steps to keep aware of. The most important things to consider when exporting images from Lightroom are:

      Metadata

      File Type

      Export Location

      Let’s break down each of these steps individually.

      – How To Edit Your Image Metadata In Lightroom

      Metadata is the information behind your photo. In an image’s metadata you can find copyright information, camera settings, and even location info. Particularly when you’re sharing your photos online, it’s important to add copyright info to your image. That way you can always claim it as rightfully yours!

      Below is a super in-depth tutorial about adding metadata to Lightroom images by Aaron Nace. Even if you’re a total beginner, it’s essential to stay on top of your metadata!

      – Step By Step Guide To Exporting Images From Lightroom

      When you go to export (File > Export), a new dialogue box will appear with all of the export information for your image. Let’s go through each crucial export section and discover what it’s used for.

      3. File Settings: If you need to change the type of file you’re exporting, you can do that in the File Settings tab. Here you can set your file type to a series of different options. If you have no idea what these file types are, just export your image as a JPEG for the most versatility. Just make sure you set your quality to 100 if you want the best results possible with your export!

      Conclusion

      So that was everything you need to know about how to edit photos in Lightroom like a pro! The tips discussed in this guide will help you to edit photos more efficiently and with added confidence in Lightroom. No matter how you go about it, learning how to edit pictures takes time and practice. Don’t let yourself get discouraged and keep experimenting with new techniques!

      If you enjoyed this article, be sure to share it with a friend who’s wanting to learn how to edit photos in Lightroom! By sharing this guide, you help to support my blog and the creation of more articles like this one. I appreciate you!

      If you want to stay up to date with more articles like this one, be sure to subscribe to my weekly newsletter for more great tips and tutorials!
