Learn The Basic Concepts Of Security Engineering


Introduction to Security engineering

Security engineering focuses on the security aspects of system development so that systems can deal robustly with losses caused by accidents, ranging from natural disasters to malicious attacks. Its main goal is not only to satisfy pre-defined functional and user requirements but also to prevent misuse of the system and malicious behavior. Security is a quality factor that signifies the ability of a system to protect itself from accidental and malicious external attacks. It has become an important issue as systems have become increasingly networked, making external attacks through the internet possible. Security makes a system available, safe, and reliable; conversely, in a networked system, reliability and safety become harder to guarantee.


Why do we need security engineering?

Security risk management

Vulnerability avoidance: The system is designed so that vulnerabilities do not occur. For example, if a system is not connected to a network, external attacks are not possible.

Detection and removal of attacks: The system is designed so that attacks can be detected and neutralized before they expose data or programs, much as virus checkers detect and remove viruses before they infect a system.

Damage caused by insecurity

Corruption of programs and data: The programs or data in the system may be modified by unauthorized users.

Unavailability of service: The system is attacked and put into a state where normal services are not available.

Leakage of confidential information: Information that is controlled by the system may be disclosed to the people who are not authorized to read or use that information.

System survivability

System survivability is the ability of a system to continue performing critical functions on time even if portions of the system are compromised by malicious attacks or accidents. System survivability includes elements such as reliability, dependability, fault tolerance, verification, testing, and information system security. Let’s discuss some of these elements.

Adaptability: Even if the system is attacked, it should have the capability to adapt to the threat and continue providing service to the user, and network performance as experienced by the end user should not degrade.

Availability: The degree to which software remains operable in the presence of system failures.

Time: Services should be provided to the user within the time expected by the user.

Connectivity: It is the degree to which a system performs when all nodes and links are available.

Correctness: It is the degree to which all Software functions are specified without any misunderstanding and misinterpretations.

Software dependence: The degree to which hardware does not depend upon the software environment.

Hardware dependence: The degree to which software does not depend upon hardware environments.

Fault tolerance: The degree to which the software will continue to work without a system failure that would cause damage to the user, and the degree to which the software includes recovery functions.

Fairness: It is the ability of the network system to organize and route the information without any failure.

Interoperability: It is the degree to which software can be connected easily with other systems and operated.

Performance: It is concerned with quality factors like efficiency, integrity, reliability, and usability. Sub-factors include speed and throughput.

Predictability: It is the degree to which a system can provide countermeasures to the system failures in the situation of threats.

Modifiability: It is the degree of effort required to make modifications to improve the efficiency of functions of the software.

Safety: It is the ability of the system to not cause harm to the network or to personnel.

Recoverability: It is the ability of the system to recover from an accident and provide normal service on time.

Verifiability: It is about the efforts required to verify the specified Software functions and corresponding performance.

Security: It is the degree to which the software can detect and prevent information leaks, loss of information, malicious use, and any type of destruction.

Testability: It is about the efforts required to test the software.

Reusability: It is the degree to which the software can be reused in other applications.

Restorability: It is the degree to which a system can restore its services on time.

Recommended Articles

This is a guide to security engineering. Here we have discussed the basic concepts of security engineering and the various terms used for system protection. You may also have a look at the following articles to learn more –


Project Management Triangle – Definition, Basic Concepts, And Applications With Templates

Project management is a complex field. A manager has to oversee everything — from employees working on a project to the limited resources to the possibility of risks that might crop up at any time. To ensure the project is delivered as required, project managers follow several approaches.

One popular concept that has recently gained immense popularity is the project management triangle. As the name suggests, it covers the three core concepts—time, cost, and scope. Keep reading to get a closer look at the project management triangle, its uses, and how it can help managers deliver projects successfully.

What Is the Project Management Triangle?

The project management triangle consists of the three main elements of any project: time, cost (budget), and scope. The triangle establishes the relationship between these elements and helps the manager keep them in balance.

If there’s a change in one of the elements in this triangle, the manager has to adjust the other two elements to maintain the balance. If these variables aren’t adjusted, the triangle might break, ruining the quality of the project. Simply put, a project management triangle must be balanced to maintain quality while adhering to the deadline and the project’s specifications.

For example, if the project’s scope increases, the manager has to increase the deadline and cost to ensure all resources are available and there’s adequate time to complete the project successfully.
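As a toy sketch of this balancing act (the linear scaling below is a simplifying assumption made purely for illustration, not a real project-estimation formula), a scope increase propagating to time and cost might look like:

```python
def rebalance(scope, time_weeks, cost, new_scope):
    """Toy model: assume time and cost scale linearly with scope.
    The linear relationship is an illustrative assumption only."""
    factor = new_scope / scope
    return round(time_weeks * factor, 1), round(cost * factor, 2)

# Scope grows by 20%, so time and cost grow by 20% to keep the triangle balanced
new_time, new_cost = rebalance(scope=100, time_weeks=10, cost=50_000, new_scope=120)
print(new_time, new_cost)  # 12.0 60000.0
```

In practice the trade-off is rarely this linear, but the sketch captures the rule: when one side of the triangle moves, the others must move with it.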

The Triple Constraints

The project triangle consists of three constraints. Let’s understand each in detail.

1. Scope

The scope of the project refers to its size. It specifies the goals of the project, based on which the manager can estimate the total cost of the project, the labor required to finish it, and by what time they can deliver it to the client without affecting the quality.

The scope is the most crucial component of any project, which is why it’s discussed at an early stage (before commencing a project). It’s obvious that any change in the scope of the project will affect the whole triangle. As mentioned above in the example, increasing the scope will result in more time and cost to complete it. As a result, the manager might have to add more cost, extend the delivery date, or do both to keep the triangle intact.

2. Cost

The cost constraint covers the expenses you will incur in completing the project. This may cover the cost of labor, machinery, raw materials, and all resources required to achieve the scope. Most clients are cost-sensitive. That’s the first thing they want to know before finalizing the project. A project manager must be able to give a proper estimate of the cost. If there’s any change in the scope or timeline, the cost must be adjusted accordingly.

The cost in the project management triangle isn’t just the dollar amount, but it covers your labor and all the resources you have invested in completing the project. This might include your team members, equipment, and facilities. It’s important to factor in everything that has a financial value when estimating cost. For instance, in a project that requires more employees than your in-house team, you need to add their salaries to the cost. Or, if your project requires keeping your workplace open for longer than normal duration, you have to estimate the electricity usage for extra hours.
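To make the idea concrete, here is a minimal sketch in which every figure (salaries, equipment rental, electricity rate) is hypothetical and chosen purely for illustration:

```python
# All figures below are hypothetical, for illustration only
labor = 3 * 4_000               # three extra contract hires at $4,000 each
equipment = 2_500               # rented equipment and facilities
extra_hours = 60                # workplace kept open 60 hours beyond normal
electricity = extra_hours * 12  # assumed $12 per extra hour of utilities

total_cost = labor + equipment + electricity
print(total_cost)  # 15220
```

The point is not the numbers but the habit: everything with a financial value, down to the extra utility hours, gets a line in the estimate.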

3. Time

Time constraint refers to the expected delivery date. Your client will assign you a project with a deadline. This is one of the most crucial yet complicated variables of the project management triangle. Managing deadlines while keeping the cost and scope of the project aligned can get tricky. A sick employee, an unmanaged workforce, not getting the required resources on time, and a delay in getting materials can result in time constraints that may eventually make it difficult for you to finish a project within your desired timeframe.

Since it’s a vital part of a project management triangle, any problem with the time can increase the cost, as you might need more labor for a simple task, or it might affect the project scope. Besides, clients are often strict about deadlines. The inability to finish the project within the deadline, irrespective of the reason, can affect your relationship with the client. They might terminate the contract with you.

How to Manage the Three Constraints?

Each variable of the project management triangle must work together to ensure your project is completed successfully. Here are a few strategies for managing the three constraints well.

Set Your Priorities

The main purpose of the project management triangle is to balance the three variables, without which it’s impossible to complete the project. When developing a project triangle, you must decide which variable is flexible enough to be adjusted. For instance, if staying within the budget is your main priority, you can extend the deadline instead of hiring more labor. If your client has a strict deadline, it’s wiser to ask them to increase the budget so that you can employ more labor to finish everything within the assigned timeframe.

Update Frequently

Any change to the project triangle must be reported to the stakeholders immediately. Even if you are just pushing back the deadline for 1-2 days, you must inform the stakeholders. Likewise, your business associates, investors, and all parties involved in the project must stay up-to-date with your progress. Inform your project team of any delay, requirement for an additional budget, and other challenges. Keeping everyone informed about the project’s status and the triangle ensures that your goals are met, and the project is completed on time and within budget.


Before starting a project, the manager must make a project triangle that describes each constraint. You must involve stakeholders, clients, and other parties involved in the project to discuss these constraints and decide which one must be adjusted to what extent if the need arises. This will give you a better idea of where you can accommodate changes and which areas of the triangle are flexible enough to be adjusted.

The Feats Of Engineering That Dazzled Us In 2023

Looking for the complete list of 100 winners? Check it out here.

Steelmaking yields between seven and nine percent of the world’s carbon emissions, mostly due to a specially processed type of coal called “coke.” At temperatures as high as 3,000°F, coke reacts with oxygen in iron ore, purifying the metal into a form needed to make steel—but belching carbon dioxide in the process. To reduce the footprint, a Swedish industrial consortium developed Hybrit, a steel whose production taps hydrogen, rather than carbon, to transform iron ore. The hydrogen, freed from water, reacts with the oxygen in ore in a machine called a shaft furnace, heated to 1,500°F with fossil-free wind energy and hydropower. The scheme releases hydrogen and water, instead of carbon dioxide, and the resulting “sponge iron” melts in an electric arc furnace with a small amount of carbon to create steel. Hybrit says the process has carbon dioxide emissions less than 2 percent of those from the standard coke-fueled regimen. This past summer, Volvo took delivery of the first batch of this “green steel” and used it to make a mining and quarrying vehicle.

A cleaner way to ship

Container ships fuel our economy of cheap consumer goods, but create almost three percent of the world’s carbon dioxide emissions. Electric batteries don’t have the energy density to efficiently power the massive vessels—and plunking chargers in the middle of the ocean is pretty much impossible. This year, Finnish engine maker Wärtsilä teamed up with the Norwegian logistics giant Grieg to bet on carbon-free ammonia to propel future ships. Powered by a Norwegian wind farm, engineers will use electrolysis to create hydrogen gas that reacts with nitrogen in a factory to create ammonia.  Wärtsilä already completed an engine burning a mix of 70 percent ammonia, and is planning a pure ammonia version to deploy in a tanker in 2024.

Your downtown sustainable seafood farm

Global hunger for farmed shrimp has destroyed some 3.4 million acres of mangrove forests since 1980, mostly in Southeast Asia. Tearing apart those carbon-absorbing ecosystems gives the practice a footprint higher than dairy cattle, pigs, or chicken. Disease outbreaks and waterways choked with waste also plague the industry. The “Vertical Oceans” model takes the whole operation indoors. The shellfish live in modular school-bus sized tanks, and algae, seaweed, and bottom-feeding fish filter out waste. This way, nearly 100 percent of the water gets recirculated, and there is no need for a sewer. A prototype in Singapore delivered 10 harvests of shrimp this year, totaling more than a ton of crustaceans.

A bridge that spots its flaws

The first sea-bound floating rollercoaster



Normal roller coasters use gravity to send thrill-seekers zooming and looping. But if you want to build a ride on a cruise ship—where stable, level ground is far from guaranteed—you have to get creative. Carnival Cruise Line’s BOLT coaster uses electricity to power its wee motorcycle-esque cars along a long, looping track. Riders control the speed, up to 40 mph, and travel 187 feet above sea level. Using the motor for propulsion, rather than steep freefalls, prevents the experience from reaching unsafe speeds.

Batteries that could make dirty electricity obsolete

To maintain fully renewable grids, utilities need big, inexpensive batteries to meet peak demand when the wind isn’t blowing or the sun isn’t shining. But, the lithium-ion cells inside laptops and EVs are expensive. So Form Energy has pioneered a new and highly efficient battery chemistry based on one of the most abundant metals in the Earth: iron. The company’s “Big Jim” prototype discharges electrons by reacting ambient oxygen with iron, creating rust. Inbound electrical current turns the rust back into iron, releasing oxygen, and recharging the battery. Environmental engineers say a battery that runs at $20 per kilowatt-hour is the magic number for utilities to say goodbye to coal and natural gas—which is where Form Energy hopes to price Big Jim’s final product.

AI that predicts the 3D structure of proteins

Before this year, science knew the exact 3D shape of only 17 percent of the proteins in the human body—essential components of life responsible for everything from cell maintenance to waste regulation. Understanding how these chains of amino acids pretzel themselves into unique configurations has been something of a holy grail for 50 years. AlphaFold, a machine learning algorithm, has now cracked the structures of more than 98 percent of the 20,000 proteins in the human body—with 36 percent of its predictions accurate down to the atomic level. DeepMind has put its source code and database of predictions in the public domain, opening up new possibilities for those developing new medications, doctors trying to create inhibitors for pathogenic mutations, or designers developing new materials.

Using the sky as an air conditioner

Air conditioners and fans already consume 10 percent of the world’s electricity, and AC use is projected to triple by the year 2050, sucking up more energy and pushing heat back into the surrounding landscape. SkyCool is breaking this dangerous feedback loop with rooftop nanotech that reflects light. Coated with multiple layers of optical films, the aluminum-based panels bounce radiation at wavelengths between 8 and 13 micrometers, a specific spot that allows the waves to pass through Earth’s atmosphere and into space. In doing so, the panel temperatures decline by up to 15°F, offering emissions-free cooling to a building’s existing systems. A prototype installed last fall on a grocery store in Stockton, Calif., cooled water pipes beneath the panels to chill the store’s refrigeration system—saving an estimated $6,000 a year in electrical bills.

A pair of robotic hands for laying explosives

A look into the eye of a hurricane

To understand how hurricanes intensify and better forecast future disasters, scientists need data about barometric pressure, air and water temperature, humidity, and wind conditions inside a raging storm. Powered by the sun and wind, the autonomous 23-foot Saildrone became the first-ever robotic vehicle to navigate into the eye of a hurricane this past September, when it entered the category 4 storm Hurricane Sam. With its instrument wing shortened to better endure extreme conditions, the Saildrone vessel offered first-of-its-kind footage and readings, all amid winds hitting 120 mph. Labs across the country are already putting this floating Swiss Army Knife, which offers data from the ocean’s surface missing from satellite imagery, to work: NASA to augment imperfect satellite readings and study climate change, and NOAA to survey the health of Alaskan pollock.   

Computer Engineering Vs Software Engineering

Difference Between Computer Engineering vs Software Engineering


Head to Head Comparison Between Computer Engineering vs Software Engineering (Infographics)

Key Difference Between Computer Engineering vs Software Engineering

Let us discuss some of the major key differences between Computer Engineering vs Software Engineering:

One of the major differences between software engineering and computer engineering lies in their core educational studies. Computer engineering covers the analysis of data, computing processes, and knowledge of both software and hardware systems, and it builds an understanding of data management processes. Software engineering deals with the software development process, its stages, and how software performance can be improved. In short, computer engineering teaches the science of how computer systems work, while software engineering applies mathematical and engineering principles to the design of software.

Another important difference between the two branches is career paths. A computer engineering graduate has several paths to choose from, such as the IT industry, website design, game development, and IT support. A software engineering graduate, on the other hand, has more specific job roles centered on designing software systems.

A further difference is software-hardware interaction. Computer engineering includes the concepts of how software interacts with hardware, so a computer engineer needs to understand that interaction. Software engineering covers software only: a software engineer takes care of software creation, maintenance, and the testing of software programs.

Another key difference between the two branches is the design of software. A computer engineering student learns the algorithms and theory of how programs actually work and how applications can be developed in a programming language. A software engineer, in turn, applies that knowledge to develop specific software according to business requirements.

The two branches also differ in their treatment of coding. Coding is included in both, helping students learn programming languages and their concepts, but software engineering focuses on coding as the means of developing software programs, while computer engineering focuses on computer languages and the interaction mechanisms between software and hardware.

A final difference between the two branches is their use of scientific theories. Computer engineering applies scientific theories to computer operations, data systems, and the overall procedure for designing software programs. Software engineering applies them to the design of frameworks, applications, and software programs, including real-time applications that model real-world scenarios.

Computer Engineering vs Software Engineering Comparison Table

Factor Computer Engineering Software Engineering

Definition: Computer engineering is a branch that deals with computer systems, building knowledge of the computer system, its processes, and various computational processes. Software engineering is a branch that analyzes user requirements and, based on them, designs, develops, and tests software; the software produced is driven entirely by those requirements.

Meaning: In general, computer engineering is the study of computer systems and how their performance can be enhanced. Software engineering is the study of software systems and the complete procedure for building them.

Selection Procedure: Computer engineering suits an individual interested in artificial intelligence, security, machine learning, or graphics. Software engineering suits an individual interested in the complete process of building software.

Project Management: In computer engineering, project management coursework can help build a better understanding of computer system concepts, but it is mostly included in software engineering programs, where it provides proper knowledge of the software development process.

Included Courses: Computer engineering includes courses such as computing devices, data processing techniques, and data management. Software engineering includes courses such as programming, computing principles, and related subjects.

Scope: The future scope of computer engineering includes artificial intelligence, cloud computing, machine learning, and more. The future scope of software engineering depends on upcoming software technologies that can be used for software development.

Expected Salary: Both branches have bright prospects. A person who studies computer engineering is called a computer engineer, and a person who studies software engineering is called a software engineer. In this comparison, the salary of a computer engineer is higher than that of a software engineer.


Both the computer engineering and software engineering branches have their own importance and help individuals excel. Both courses provide ample opportunities to learn about computer systems, software programs, and their complete architecture.

Recommended Articles

This is a guide to Computer Engineering vs Software Engineering. Here we discuss the key differences with infographics and comparison table. You may also have a look at the following articles to learn more –

The Security Caveats Of Nfc Payments

The idea of paying for something without entering your PIN isn’t new anymore. Despite that, the practice exposes you to just as many vulnerabilities (if not more) as it did before.

Previously, I have written about Android Pay’s PIN-less mobile payment system and the negative consequences people can suffer by replacing their PINs with biometric authentication. Now there are devices such as NFC payment rings that further exacerbate the vulnerabilities of other similar solutions. It turns out that there are a couple of things you should know before you jump on the bandwagon of convenience that contactless payments provide.

People Can Listen in on Transactions

Hackers and researchers have been aware of NFC eavesdropping since at least 2013 when some folks crafted a shopping cart that could easily slip in and “listen” to transactions being made by contact-less payment. To prevent such a phenomenon from happening, readers need to encrypt their connections from end to end. Even then, the possibility of eavesdropping still exists. For consumers to be reliably safe, it’s better to avoid using NFC in crowded places.

The Data Can Be Invalidated

This particular problem annoys retailers just as much as shoppers. A hacker can place a device near the reader that corrupts the data going into the reader, making it impossible to make a purchase at that particular counter. Hackers might have an incentive to do this in conjunction with eavesdropping to make sure that the customer does not empty their balance before they have a chance to use it.

The solution to this problem is the same here as it is for eavesdropping. Retailers should use secure channels for transmitting and receiving data on their NFC readers. Although this particular attack doesn’t present a particular threat to either the retailer or the customer (just a lot of frustration), it’s worth repeating the fact that it can be especially dangerous to the customer when hackers choose to combine this with eavesdropping.

The “Man in The Middle” Attack

Described in better detail over here, a man in the middle (MiM) attack is a sophisticated form of eavesdropping in which the hacker will intercept the conversation between the NFC device and the reader processing the payment and send false information to both. This way hackers can invalidate data (sending the reader garbage information as I’ve described above) and receive the NFC payment themselves based on what the NFC device tried to send to the reader.

Because of their sophistication, such attacks are very rare, but the vulnerabilities currently present in NFC transactions create an incentive for hackers to start investing more time in making tools that will carry out these attacks. To make matters worse, hackers can actively listen in on the connection before the encryption “handshake” is complete, making encryption rather useless at this point. But one thing retailers could do is to have an active-passive style of communication where the NFC device simply sends over its data, and the reader simply processes the information and sends back purchase confirmation.

Never Underestimate Pickpockets

Of course, when you’re not cut out for cleverly hacking your way into payment portals, your best option is to simply grab whatever people are using to pay for things these days. A card is a bit harder to steal since you’d normally have to steal the entire wallet which is sitting inside of a pocket most of the time (some people use their inside coat pocket for their wallets, making this more challenging).

But phones are often kept outside of pockets and easily get lost. Even if they are in a pocket, most people won’t treat their phones with such care as they do their wallets. NFC payment rings take this a little bit further since it is even easier to lose rings. Stealing them is only a matter of finding an opportune moment when someone takes off their rings to wash their hands.

My suggestion for people using phones is to make sure they have some way to remotely lock the device down if it’s lost. Other than that, you should be avoiding NFC payments entirely if it is very important for you to minimize the chances of your money being stolen in any of the nasty ways I’ve described above.

Miguel Leiva-Gomez

Miguel has been a business growth and technology expert for more than a decade and has written software for even longer. From his little castle in Romania, he presents cold and analytical perspectives to things that affect the tech world.


Getting Started With The Basic Tasks Of Computer Vision

This article was published as a part of the Data Science Blogathon

If you are interested or planning to do anything which is related to images or videos, you should definitely consider using Computer Vision. Computer Vision (CV) is a branch of artificial intelligence (AI) that enables computers to extract meaningful information from images, videos, and other visual inputs and also take necessary actions. Examples can be self-driving cars, automatic traffic management, surveillance, image-based quality inspections, and the list goes on. 

What is OpenCV?

OpenCV is a library primarily aimed at computer vision. It has all the tools that you will need while working with Computer Vision (CV). The ‘Open’ stands for Open Source and ‘CV’ stands for Computer Vision.

What will I learn?

The article contains all you need to get started with computer vision using the OpenCV library. You will feel more confident and more efficient in Computer Vision. All the code and data are present here.

Reading and displaying the images

First, let’s understand how to read an image and display it, which is one of the basics of CV.

Reading the Image:

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

img = cv.imread('../input/images-for-computer-vision/tiger1.jpg')

The ‘img’ contains the image in the form of a numpy array. Let’s print its type and shape,

print(type(img))
print(img.shape)

The numpy array has a shape of (667, 1200, 3), where 667 is the image height, 1200 is the image width, and 3 is the number of channels.
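This height-width-channel layout can be checked without the image file itself; the sketch below substitutes a small synthetic NumPy array for the loaded image and reads one pixel's BGR channel values (OpenCV loads images in BGR order):

```python
import numpy as np

# Synthetic stand-in for a loaded image: height 4, width 6, 3 channels (BGR)
img = np.zeros((4, 6, 3), dtype=np.uint8)
img[2, 3] = (255, 0, 0)  # set the pixel at row 2, column 3 to pure blue

print(img.shape)     # (4, 6, 3) -> (height, width, channels)
b, g, r = img[2, 3]  # pixels are indexed as img[row, col]
print(b, g, r)       # 255 0 0
```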

Displaying the Image:

# Converting image from BGR to RGB for displaying
img_convert = cv.cvtColor(img, cv.COLOR_BGR2RGB)
plt.imshow(img_convert)

Drawing over Image

We can draw lines, shapes, and text on an image.

# Rectangle
color=(240,150,240) # Color of the rectangle
cv.rectangle(img, (100,100),(300,300),color,thickness=10, lineType=8)
## For filled rectangle, use thickness = -1
## (100,100) are (x,y) coordinates for the top-left point of the rectangle and (300,300) are (x,y) coordinates for the bottom-right point

# Circle
color=(150,260,50) # Color of the circle
cv.circle(img, (650,350),100, color,thickness=10)
## For filled circle, use thickness = -1
## (650,350) are (x,y) coordinates for the center of the circle and 100 is the radius

# Text
color=(50,200,100)
font=cv.FONT_HERSHEY_SCRIPT_COMPLEX
cv.putText(img, 'Save Tigers',(200,150), font, 5, color,thickness=5, lineType=20)

# Converting BGR to RGB
img_convert=cv.cvtColor(img, cv.COLOR_BGR2RGB)
plt.imshow(img_convert)

Blending Images

We can also blend two or more images with OpenCV. An image is nothing but numbers, and you can add, subtract, multiply, and divide numbers, and thus images. One thing to note is that the images must be the same size.

# For plotting multiple images at once
def myplot(images,titles):
    fig, axs=plt.subplots(1,len(images),sharey=True)
    fig.set_figwidth(15)
    for img,ax,title in zip(images,axs,titles):
        if img.shape[-1]==3:
            img=cv.cvtColor(img, cv.COLOR_BGR2RGB) # OpenCV reads images in BGR order
        else:
            img=cv.cvtColor(img, cv.COLOR_GRAY2BGR)
        ax.imshow(img)
        ax.set_title(title)

img1 = cv.imread('../input/images-for-computer-vision/tiger1.jpg')
img2 = cv.imread('../input/images-for-computer-vision/horse.jpg')

# Resizing the img1
img1_resize = cv.resize(img1, (img2.shape[1], img2.shape[0]))

# Adding, Subtracting, Multiplying and Dividing Images
img_add = cv.add(img1_resize, img2)
img_subtract = cv.subtract(img1_resize, img2)
img_multiply = cv.multiply(img1_resize, img2)
img_divide = cv.divide(img1_resize, img2)

# Blending Images
img_blend = cv.addWeighted(img1_resize, 0.3, img2, 0.7, 0) ## 30% tiger and 70% horse

myplot([img1_resize, img2], ['Tiger','Horse'])
myplot([img_add, img_subtract, img_multiply, img_divide, img_blend], ['Addition', 'Subtraction', 'Multiplication', 'Division', 'Blending'])

The multiplied image is almost white and the divided image is almost black, because white means 255 and black means 0. When we multiply two pixel values, we get a large number that saturates at 255, so the result becomes white or close to white; division gives small numbers, so the result is dark.
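This saturating behaviour is easy to verify without any images. Below is a pure-NumPy sketch of the arithmetic cv.add performs (cv2 itself is not needed here); the point is that OpenCV clips results to [0, 255] instead of letting uint8 values wrap around.

```python
import numpy as np

a = np.array([200, 100], dtype=np.uint8)
b = np.array([100, 50], dtype=np.uint8)

# Plain NumPy uint8 addition wraps around at 256: 200 + 100 -> 44
wrapped = a + b

# cv.add instead saturates: results are clipped to the [0, 255] range,
# which is why multiplying two images pushes most pixels toward white (255)
saturated = np.clip(a.astype(np.int32) + b.astype(np.int32), 0, 255).astype(np.uint8)

print(wrapped)     # [ 44 150]
print(saturated)   # [255 150]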

Image Transformation

Image transformation includes translating, rotating, scaling, shearing, and flipping an image.

img=cv.imread('../input/images-for-computer-vision/tiger1.jpg')
width, height, _=img.shape

# Translating
M_translate=np.float32([[1,0,100],[0,1,50]]) # Translation matrix: shift 100 px right and 50 px down
img_translate=cv.warpAffine(img,M_translate,(height,width))

# Rotating
center=(width/2,height/2)
M_rotate=cv.getRotationMatrix2D(center, angle=90, scale=1)
img_rotate=cv.warpAffine(img,M_rotate,(width,height))

# Scaling
scale_percent = 50
width = int(img.shape[1] * scale_percent / 100)
height = int(img.shape[0] * scale_percent / 100)
dim = (width, height)
img_scale = cv.resize(img, dim, interpolation = cv.INTER_AREA)

# Flipping
img_flip=cv.flip(img,1) # 0: along horizontal axis, 1: along vertical axis, -1: first vertical, then horizontal

# Shearing
srcTri = np.array( [[0, 0], [img.shape[1] - 1, 0], [0, img.shape[0] - 1]] ).astype(np.float32)
dstTri = np.array( [[0, img.shape[1]*0.33], [img.shape[1]*0.85, img.shape[0]*0.25], [img.shape[1]*0.15, img.shape[0]*0.7]] ).astype(np.float32)
warp_mat = cv.getAffineTransform(srcTri, dstTri)
img_warp = cv.warpAffine(img, warp_mat, (height, width))

myplot([img, img_translate, img_rotate, img_scale, img_flip, img_warp],
       ['Original Image', 'Translated Image', 'Rotated Image', 'Scaled Image', 'Flipped Image', 'Sheared Image'])

Image Preprocessing

Thresholding: In thresholding, the pixel values less than the threshold value become 0 (black), and pixel values greater than the threshold value become 255 (white).

I am taking the threshold to be 150, but you can choose any other number as well.

# For visualising the filters in 3D
import plotly.graph_objects as go
from plotly.subplots import make_subplots

def plot_3d(img1, img2, titles):
    fig = make_subplots(rows=1, cols=2,
                        specs=[[{'is_3d': True}, {'is_3d': True}]],
                        subplot_titles=[titles[0], titles[1]],
                        )
    x, y=np.mgrid[0:img1.shape[0], 0:img1.shape[1]]
    fig.add_trace(go.Surface(x=x, y=y, z=img1[:,:,0]), row=1, col=1)
    fig.add_trace(go.Surface(x=x, y=y, z=img2[:,:,0]), row=1, col=2)
    fig.update_traces(contours_z=dict(show=True, usecolormap=True, highlightcolor="limegreen", project_z=True))
    fig.show()

img=cv.imread('../input/images-for-computer-vision/simple_shapes.png')

# Pixel values less than the threshold become 0 and more than the threshold become 255
_,img_threshold=cv.threshold(img,150,255,cv.THRESH_BINARY)
plot_3d(img, img_threshold, ['Original Image', 'Threshold Image=150'])

After applying thresholding, all values greater than 150 become 255, and the rest become 0.
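The rule can be checked on a handful of pixel values. The NumPy one-liner below mimics what cv.threshold with THRESH_BINARY does: values strictly greater than the threshold map to 255, everything else (including the threshold value itself) maps to 0.

```python
import numpy as np

pixels = np.array([0, 100, 150, 151, 255], dtype=np.uint8)

# Mimic cv.threshold(..., 150, 255, cv.THRESH_BINARY):
# dst = 255 where src > 150, else 0. Note that 150 itself maps to 0.
binary = np.where(pixels > 150, 255, 0).astype(np.uint8)

print(binary)   # [  0   0   0 255 255]
```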

Filtering: Image filtering means changing the appearance of an image by changing the values of its pixels. Each type of filter changes the pixel values according to a corresponding mathematical formula. I am not going into the detailed math here, but I will show how each filter works by visualizing it in 3D. If you are interested in the math behind the filters, you can check this.

img=cv.imread('../input/images-for-computer-vision/simple_shapes.png')

# Gaussian Filter
ksize=(11,11) # Both should be odd numbers
img_gaussian=cv.GaussianBlur(img, ksize,0)
plot_3d(img, img_gaussian, ['Original Image','Gaussian Image'])

# Median Filter
ksize=11
img_medianblur=cv.medianBlur(img,ksize)
plot_3d(img, img_medianblur, ['Original Image','Median blur'])

# Bilateral Filter
img_bilateralblur=cv.bilateralFilter(img,d=5, sigmaColor=50, sigmaSpace=5)
myplot([img, img_bilateralblur],['Original Image', 'Bilateral blur Image'])
plot_3d(img, img_bilateralblur, ['Original Image','Bilateral blur'])

Gaussian Filter: Blurs an image by removing detail and noise. For more details, you can read this.

Median Filter: Replaces each pixel value with the median of its neighbouring pixels; particularly effective against salt-and-pepper noise.

Bilateral Filter: Edge-preserving, noise-reducing smoothing.

In simple words, these filters reduce or remove noise, the random variation of brightness or color in an image; this is called smoothing.
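Smoothing can be demonstrated in one dimension without OpenCV at all. The sketch below adds random noise to a flat brightness signal and applies the simplest possible filter, a 5-tap moving average (a crude 1-D stand-in for the 2-D kernels above, not any cv function):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.full(100, 128.0)                   # a perfectly flat brightness signal
noisy = signal + rng.normal(0, 20, size=100)   # add random brightness variation (noise)

# A 5-tap moving-average (box) filter: each output is the mean of 5 neighbours
kernel = np.ones(5) / 5
smoothed = np.convolve(noisy, kernel, mode='valid')

# Averaging independent noise shrinks its spread (roughly by sqrt(5))
print(noisy.std() > smoothed.std())   # True
```

Gaussian, median, and bilateral filters are refinements of this idea: they differ in how the neighbourhood values are weighted or combined.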

Feature Detection

Feature detection is a method for making local decisions at every image point by computing abstractions of the image information. For example, in an image of a face, the features are the eyes, nose, lips, ears, etc., and we try to identify these features.

Let’s first try to identify the edges of an image.

Edge Detection

img=cv.imread('../input/images-for-computer-vision/simple_shapes.png')
img_canny1=cv.Canny(img,50, 200)

# Smoothing the img before feeding it to Canny
filter_img=cv.GaussianBlur(img, (7,7), 0)
img_canny2=cv.Canny(filter_img,50, 200)

myplot([img, img_canny1, img_canny2],
       ['Original Image', 'Canny Edge Detector(Without Smoothing)', 'Canny Edge Detector(With Smoothing)'])

Here we are using the Canny edge detector, an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. I am not going into much detail on how Canny works, but the key point is that it is used to extract edges. To know more about its working, you can check this.

Before detecting edges with the Canny method, we smooth the image to remove noise. As you can see from the image, smoothing gives us clearer edges.

Contours

img=cv.imread('../input/images-for-computer-vision/simple_shapes.png')
img_copy=img.copy()
img_gray=cv.cvtColor(img,cv.COLOR_BGR2GRAY)
_,img_binary=cv.threshold(img_gray,50,200,cv.THRESH_BINARY)

# Eroding and dilating for smooth contours
img_binary_erode=cv.erode(img_binary,(10,10), iterations=5)
img_binary_dilate=cv.dilate(img_binary,(10,10), iterations=5)

contours,hierarchy=cv.findContours(img_binary,cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
cv.drawContours(img, contours,-1,(0,0,255),3) # Draws the contours on the original image, just like the drawing functions above

myplot([img_copy, img], ['Original Image', 'Contours in the Image'])

Erode: The erosion operation uses a structuring element to probe and reduce the shapes contained in an image.

Dilation: Adds pixels to the boundaries of objects in an image; simply the opposite of erosion.
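A toy, loop-based re-implementation of erosion (a hypothetical sketch, not cv.erode itself) makes the "probing" idea concrete: a foreground pixel survives only if its entire k×k neighbourhood is foreground.

```python
import numpy as np

def erode(binary, k=3):
    """Minimal binary erosion with a k x k square structuring element:
    a pixel stays 1 only if every pixel in its neighbourhood is 1."""
    pad = k // 2
    padded = np.pad(binary, pad, constant_values=0)
    out = np.zeros_like(binary)
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].min()
    return out

square = np.zeros((5, 5), dtype=np.uint8)
square[1:4, 1:4] = 1          # a 3x3 block of foreground pixels
print(erode(square).sum())    # only the centre pixel survives -> 1
```

Replacing the `.min()` with `.max()` turns this into dilation: a pixel becomes foreground if any neighbour is foreground, which is why dilation grows shapes while erosion shrinks them.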

Hulls

img=cv.imread('../input/images-for-computer-vision/simple_shapes.png',0)
_,threshold=cv.threshold(img,50,255,cv.THRESH_BINARY)
contours,hierarchy=cv.findContours(threshold,cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
hulls=[cv.convexHull(c) for c in contours]
img_hull=cv.drawContours(img, hulls,-1,(0,0,255),2) # Draws the convex hulls on the image, just like the contours above
plt.imshow(img)

Summary

We saw how to read and display an image; draw shapes and text over an image; blend two images; transform an image by rotating, scaling, and translating it; filter images with Gaussian, median, and bilateral blurs; and detect features using Canny edge detection, contours, and convex hulls.

I have only scratched the surface of the computer vision world. The field evolves every day, but the basics remain the same, so if you understand the core concepts, you will definitely excel in this field.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author's discretion.
