iOS-Palette: Accurately Extracting an Image's Dominant Color on iOS (Source Code Attached)

Source code: [direct link]. (If you like the article, please drop by and give it a star. Thanks!)

1. Background

Extracting an image's dominant colors to build immersive, interactive scenes is increasingly common, for example on personal homepage web pages or in Instagram's image tone selection. So how do you get the dominant colors of an image? In android.support.v7, Google provides a solution called Palette. The effect looks like this:

[Image: Google's official rendering]

I analyzed this algorithm in a previous article (click to view). The algorithm is good, but it is written in Java and relies on many Android and Java utility classes. We iOS developers and Palette were like the farthest distance between programmers: "your source code is right there, yet I can only sigh that I cannot call its API."

But now we, the iOS camp, have our own Palette! Let's ride the east wind of WWDC 2017, build an Objective-C API, and take the broad road to happiness!

2. Why use Palette?

Someone may want to ask: to extract an image's dominant color, can't I just traverse all the pixels, count which RGB value appears most often, and call that the dominant hue?

Does that work? Not really. Beyond some simple scenes, the result can be even worse. Consider these cases: a product photo on a dark background, or lights at night, such as neon:

[Image: ball lights in the dark]

Or rows of freshly painted yellow ofo bikes parked on gray ground:

[Image: a row of ofo bikes on gray ground]

Or consider an image whose dominant tone is not a solid color but a gradient: the RGB values scatter, and simple counting produces a large error. So it is easy to see that an image's dominant color is not simply its most frequent RGB value. It should match the habits of our eyes, the visual focal point we recognize at a glance. And that is what Palette really does.
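To see why naive counting fails, here is a minimal Python sketch with synthetic pixel data (nothing to do with the library's actual code):

```python
from collections import Counter

# A 100-pixel "sky" gradient: blue that gradually brightens.
# Visually this is one dominant blue, but almost every pixel's
# exact RGB triple is different.
gradient = [(100 + i, 150 + i, 255) for i in range(100)]

# A small but perfectly uniform gray patch, e.g. a road in the corner.
gray_patch = [(128, 128, 128)] * 30

pixels = gradient + gray_patch
most_common_color, count = Counter(pixels).most_common(1)[0]

# Naive counting picks the gray: its 30 identical pixels beat every
# individual shade of the 100-pixel blue gradient.
print(most_common_color, count)  # (128, 128, 128) 30
```

The gradient "wins" visually but loses the vote, which is exactly the dispersion problem Palette's VBox grouping addresses.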

3. So what does Palette do?

Palette has two major features, each solving a major problem: first, making sure the extracted color is actually the visual focus; second, handling color dispersion.

(1) How to solve the problem of the visual color focus?

The RGB color model describes three color channels, which combine into the colors we ultimately see. It can represent an astonishing number of colors, enough to cover everything the human eye can perceive. But it says nothing about how attractive a color is to the eye. Think back to the two pictures above: weren't we immediately drawn to the bright blue and yellow? Note that I used the word "bright."

So what makes a color bright? The answer is its saturation, that is, its vividness; plus an appropriate lightness, that is, how light or dark the color is; plus a large enough count, that is, the number of pixels represented by the color or its color family.

In general, the higher the saturation, the more vivid the color and the more it attracts the eye. Appropriate lightness also helps: if it is too low the color is very dark, and if it is too high the color approaches white; either way, the eye ignores it. As for the pixel count, it goes without saying: the more, the better!

Students who have studied images and color models can surely guess what I am about to say. Yes, the HSL color model is used for the evaluation. Saturation is the S (saturation) in HSL, and lightness is the L (lightness). Palette's solution is to score each color using its S and L values together with its pixel count.
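To make the scoring idea concrete, here is a minimal Python sketch. The weights and target values are made up for illustration; they are not Palette's real constants:

```python
import colorsys

def score(rgb, population, target_s=1.0, target_l=0.5,
          w_s=3.0, w_l=6.0, w_p=1.0, max_population=1000):
    """Hypothetical score: the closer S and L are to the target, and the
    larger the pixel count, the higher the score."""
    r, g, b = (c / 255.0 for c in rgb)
    # Note: colorsys returns (h, l, s), i.e. HLS order.
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return (w_s * (1 - abs(s - target_s))
            + w_l * (1 - abs(l - target_l))
            + w_p * (population / max_population))

# A vivid red with many pixels outscores a washed-out gray with few.
vivid = score((220, 30, 30), population=800)
dull = score((128, 128, 128), population=100)
print(vivid > dull)  # True
```

With target_s near 1.0 and target_l near 0.5, a saturated, medium-lightness color naturally floats to the top, matching the "bright colors attract the eye" intuition above.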

To satisfy different color extraction needs (for example, some people want bright colors while others want muted ones), Palette divides the color targets into classes: by lightness, the high-lightness Light class, the normal-lightness Normal class, and the dark Dark class; by saturation, the highly saturated Vibrant class and the low-saturation Muted class. Combining them freely gives six modes:

LIGHT_VIBRANT_MODE (high lightness, high saturation)
VIBRANT_MODE (normal lightness, high saturation)
DARK_VIBRANT_MODE (dark lightness, high saturation)
LIGHT_MUTED_MODE (high lightness, low saturation)
MUTED_MODE (normal lightness, low saturation)
DARK_MUTED_MODE (dark lightness, low saturation)

Each color target mode has its own unique Target parameters: the closer a color's S and L are to the target values, the higher its score. Combined with the pixel-count score, the highest-scoring color is the target color we extract for that mode.
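A hypothetical per-mode sketch in Python. The (S, L) target values below are illustrative guesses, not the library's actual parameters, and the real Palette targets also carry min/max ranges and weights:

```python
# Hypothetical (target_saturation, target_lightness) pairs per mode.
TARGETS = {
    "LIGHT_VIBRANT_MODE": (1.0, 0.74),
    "VIBRANT_MODE":       (1.0, 0.50),
    "DARK_VIBRANT_MODE":  (1.0, 0.26),
    "LIGHT_MUTED_MODE":   (0.3, 0.74),
    "MUTED_MODE":         (0.3, 0.50),
    "DARK_MUTED_MODE":    (0.3, 0.26),
}

def best_color(histogram, mode):
    """histogram: {(s, l): pixel_count}. Returns the (s, l) pair that best
    matches the mode's target, with a population bonus."""
    target_s, target_l = TARGETS[mode]
    total = sum(histogram.values())

    def score(item):
        (s, l), count = item
        return (1 - abs(s - target_s)) + (1 - abs(l - target_l)) + count / total

    return max(histogram.items(), key=score)[0]

hist = {(0.9, 0.5): 500, (0.2, 0.5): 300, (0.9, 0.8): 200}
print(best_color(hist, "VIBRANT_MODE"))  # (0.9, 0.5)
print(best_color(hist, "MUTED_MODE"))    # (0.2, 0.5)
```

The same histogram yields different winners per mode, which is exactly why one image can supply six different target colors.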

(2) How to solve the problem of color dispersion?

The other problem is color dispersion. After all, pixels with exactly identical RGB values are rare. Take a blue sky: near the sun it is whiter and brighter, while away from the sun it becomes a purer, more saturated blue. When the blues are too dispersed, they can easily lose to other, more concentrated colors, such as the green hills below. Here we need a box to group these similar-but-not-identical blues and use their average color to represent them. That is Palette's VBox concept. For a deeper understanding of VBox, see my earlier analysis [click to view].
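The VBox idea can be sketched in a few lines of Python. This uses a crude fixed-bucket quantization for illustration; the real Palette uses median-cut style box splitting:

```python
from collections import defaultdict

def quantize(pixels, bits=4):
    """Drop the low bits of each channel so similar colors share one bucket."""
    shift = 8 - bits
    boxes = defaultdict(list)
    for r, g, b in pixels:
        boxes[(r >> shift, g >> shift, b >> shift)].append((r, g, b))
    return boxes

def average_colors(boxes):
    """Represent each box by its mean color, weighted by its pixel count."""
    result = []
    for members in boxes.values():
        n = len(members)
        avg = tuple(sum(c[i] for c in members) // n for i in range(3))
        result.append((avg, n))
    return sorted(result, key=lambda x: -x[1])

# The scattered sky blues now collapse into one heavily populated box
# that easily beats the uniform gray patch.
sky = [(100 + i % 8, 150 + i % 8, 250) for i in range(100)]
ground = [(128, 128, 128)] * 30
(top_color, top_count), *_ = average_colors(quantize(sky + ground))
print(top_color, top_count)  # (103, 153, 250) 100
```

Compare this with the naive-counting sketch earlier: the same kind of gradient that lost the exact-RGB vote now wins once similar shades are pooled into one box.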

With these two problems solved, Palette's recognition becomes much more accurate! So when you need to identify an image's dominant colors, don't forget Palette.

4. How to use iOS-Palette?

In iOS-Palette, I followed the same idea as in TTAVPlayer [click to view]: "guarantee minimal access cost while ensuring maximum scalability." For most needs, you don't have to know what a PaletteTarget is (high lightness, low saturation, and so on); you just call these simple APIs:

[Code screenshot: class Palette]
[Code screenshot: category UIImage+Palette]

Then you get the callback:

[Code screenshot: callback returning recommendColor]

But when you need more color modes, you can use:

[Code screenshot: category UIImage+Palette]

You can also combine modes with the `|` separator to mix and match as needed. When you want the target colors of all modes, use ALL_MODE_PALETTE to fetch every mode's color at once. The results come back in the allModeColorDic dictionary of the callback block.

Tip: the recommended-color logic prefers VIBRANT_MODE. If that mode fails to recognize a target color, it falls back in this order: MUTED_MODE, LIGHT_VIBRANT_MODE, LIGHT_MUTED_MODE, DARK_VIBRANT_MODE, DARK_MUTED_MODE.
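That fallback chain can be sketched as follows (hypothetical helper names for illustration; the real library delivers the result through its callback block):

```python
# Preference order for the recommended color, as described above.
FALLBACK_ORDER = [
    "VIBRANT_MODE",
    "MUTED_MODE",
    "LIGHT_VIBRANT_MODE",
    "LIGHT_MUTED_MODE",
    "DARK_VIBRANT_MODE",
    "DARK_MUTED_MODE",
]

def recommend_color(all_mode_color_dic):
    """Return the first mode's color that was actually recognized."""
    for mode in FALLBACK_ORDER:
        color = all_mode_color_dic.get(mode)
        if color is not None:
            return color
    return None  # no mode produced a target color

# VIBRANT_MODE found nothing here, so MUTED_MODE's color is recommended.
colors = {"VIBRANT_MODE": None, "MUTED_MODE": "#708090"}
print(recommend_color(colors))  # #708090
```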

5. Demo effect gallery

(1) Performance under white-background interference

[Image: effect 1]

(2) Performance in a dark environment

[Image: effect 2]

(3) Performance under normal lighting conditions

[Image: effect 3]

Since each image is rather large, more renderings can be viewed by clicking:

(1) [Click to view] (2) [Click to view] (3) [Click to view] (4) [Click to view] (5) [Click to view] (6) [Click to view]

6. About the author

Zhihu: https://www.zhihu.com/people/tang-di-78/activities

Github:https://github.com/tangdiforx/iOSPalette

Jianshu: http://www.jianshu.com/p/01df6010dded