# πΈοΈCommon Applications

## π€ΈββοΈ Solving Approach

### π€³ One Shot Learning

Learning from a single example per person (the one we have in the database) in order to recognize that person again

### π The Process

• Get the input image

• Check whether it matches any of the faces stored in the database

### π How to Check?

We have to calculate the similarity between the input image and each image in the database, so we:

• β­ Use some function that

• similarity(img_in, img_db) = some_val

• π·ββοΈ Specifiy a threshold value

• π΅οΈββοΈ Check the threshold and specify the output

### π€ What can the similarity function be?

#### π· Siamese Network

A CNN used in the face verification context: it receives two images as input, applies convolutions to compute a feature vector from each image, calculates the difference between the two vectors, and then outputs a decision.

In other words: it encodes the given images

#### π Visualization

Architecture:

We can train the network by taking an anchor (reference) image A and comparing it with both a positive sample P and a negative sample N, such that:

• π§ The dissimilarity between the anchor image and positive image must low

• π§ The dissimilarity between the anchor image and the negative image must be high

So:
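The two requirements above lead to the standard triplet loss, where d(·, ·) is a dissimilarity (distance) between the two encodings:

$$\mathcal{L}(A, P, N) = \max\big(d(A, P) - d(A, N) + \text{margin},\ 0\big)$$

The loss is zero once d(A, N) exceeds d(A, P) by at least the margin, and positive otherwise.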

Another variable called the margin, which is a hyperparameter, is added to the loss equation. The margin defines how far apart the dissimilarities should be; e.g., if margin = 0.2 and d(A, P) = 0.5, then d(A, N) should be at least 0.7. The margin helps us distinguish the two images better.

Therefore, by minimizing this loss function:

• We update the weights and biases of the Siamese network.

For training the network, we:

• Take an anchor image, randomly sample positive and negative images, and compute the loss function

## π  Neural Style Transfer

Generating an image G given a content image C and a style image S

### π Visualization

So to generate G, our NN has to learn features from S and apply suitable filters to C

Usually we optimize the parameters (weights and biases) of the NN to get the desired performance. Here, in Neural Style Transfer, we instead start from an image composed of random pixel values, and we optimize a cost function by changing the pixel values of the image.

In other words, we:

• π©βπ« Define some cost function J

• π©βπ§ Iteratively modify each pixel so as to minimize our cost function

Long story short: while training NNs we update our weights and biases, but in style transfer we keep the weights and biases constant and instead update the image itself
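The "update the image, not the weights" idea can be shown with a toy sketch. The cost here is deliberately simplified to just the squared distance to a content image C (the real NST cost also includes a style term computed from network activations); C, G, and the learning rate are illustrative values:

```python
import numpy as np

C = np.array([[0.2, 0.8], [0.5, 0.1]])  # content image (held fixed)
G = np.random.rand(2, 2)                # generated image: random pixel values
lr = 0.1                                # learning rate for pixel updates

for _ in range(200):
    grad = 2.0 * (G - C)                # dJ/dG for the toy cost J(G) = ||G - C||^2
    G -= lr * grad                      # gradient step on the pixels, not on any weights
```

After enough iterations the pixels of G converge toward the minimizer of the cost (here, C itself); with the full NST cost they would instead settle on a blend of content and style.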

#### β Cost Function

We can define J as

$$J(G) = \alpha \, J_{content}(C, G) + \beta \, J_{style}(S, G)$$

Where:

• α and β are hyperparameters that weight the content and style terms
