Education Standards
Monochrome conversion, Image Optimization
Overview
Working with the HTML canvas element and JavaScript to reduce color noise to a range suitable for monochrome conversion, preparing an image for edge detection across a 27-color range.
Color Depth and Monochrome Conversion
Introduction:
To maximise the number of recognizable objects within an image, we must first reduce the 16,777,216 possible pixel values to a smaller number of edges. When there are only four color ranges within each of the three light channels, the image has a color depth of 64.
Monochrome is usually a reduction from the three (red, green, blue) channels to a single (true, false) value per pixel. However, data is lost from each color edge when the image is compressed this way.
3^3 = 27 colors when each RGB channel is split into thirds (26 non-zero values, plus one for zero); splitting each channel into quarters gives 4^3 = 64 colors, and halves give 2^3 = 8.
255 divides into three ranges of 85 (a whole number), but using 85 as the divisor would map the maximum value into a fourth bin (Math.floor(255 / 85) === 3). The thresholds are therefore computed by dividing by 86 instead, which keeps every channel result in {0, 1, 2}.
R: [ 0-85, 86-171, 172-255 ]
G: [ 0-85, 86-171, 172-255 ]
B: [ 0-85, 86-171, 172-255 ]
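The choice of 86 as the divisor can be checked directly. This short snippet (not part of the lesson's code) shows the binning, and why dividing by 85 would overflow the three ranges:

```javascript
// Binning a 0-255 channel value into one of three ranges by integer division.
console.log(Math.floor(50 / 86));  // 0 (first range)
console.log(Math.floor(120 / 86)); // 1 (second range)
console.log(Math.floor(200 / 86)); // 2 (third range)

// Dividing by 85 would push the maximum value into a nonexistent fourth bin:
console.log(Math.floor(255 / 85)); // 3 (out of range)
console.log(Math.floor(255 / 86)); // 2 (stays in range)
```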
Represented:
The 27 values are returned as single characters: zero is the commercial at sign "@" (character code 64), and 1 through 26 map to the letters "A" through "Z"; the function therefore returns a string data type.
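A worked example of the full encoding, using the same divisor; the name pixelToChar is hypothetical, chosen here only for illustration:

```javascript
// Encode one RGB pixel as a single character: 0 -> "@", 1-26 -> "A"-"Z".
function pixelToChar(r, g, b) {
  var index = Math.floor(r / 86) +
              Math.floor(g / 86) * 3 +
              Math.floor(b / 86) * 9;
  return String.fromCharCode(index + 64); // 64 is the character code of "@"
}

console.log(pixelToChar(0, 0, 0));       // "@" (index 0, black)
console.log(pixelToChar(255, 255, 255)); // "Z" (index 2 + 6 + 18 = 26, white)
console.log(pixelToChar(200, 100, 30));  // "E" (index 2 + 3 + 0 = 5)
```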
The example is HTML/JavaScript.
In this language, the native ImageData object (as returned by the canvas context's getImageData) is passed as the argument.
========================================================
<script>
function Twenty_seven(r255, g255, b255) {
    // Bin each 0-255 channel into 0-2, then combine as a base-3 index (0-26)
    var iaw = Math.floor(r255 / 86) +
              (Math.floor(g255 / 86) * 3) +
              (Math.floor(b255 / 86) * 9);
    return String.fromCharCode(iaw + 64); // 0 -> "@", 1-26 -> "A"-"Z"
}

function c26NULL(DELTA_omit) {
    var A3 = "";
    var AO = [];
    // ImageData.data is RGBA: step 4 bytes per pixel, ignoring alpha
    for (var nb = 0; nb < DELTA_omit.data.length; nb += 4) {
        A3 += Twenty_seven(
            DELTA_omit.data[nb + 0],
            DELTA_omit.data[nb + 1],
            DELTA_omit.data[nb + 2]
        );
    }
    //console.log(A3);
    AO.push(A3);
    AO.push(DELTA_omit.width);
    AO.push(DELTA_omit.height);
    return AO;
}
</script>
---------------------------------------------------------
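A minimal sketch of calling the conversion, assuming no browser is present: the ImageData-like object is mocked with the same data/width/height shape, and the two functions are repeated so the sketch runs on its own.

```javascript
// The lesson's two functions, repeated so this sketch is self-contained.
function Twenty_seven(r255, g255, b255) {
  var iaw = Math.floor(r255 / 86) +
            (Math.floor(g255 / 86) * 3) +
            (Math.floor(b255 / 86) * 9);
  return String.fromCharCode(iaw + 64);
}

function c26NULL(DELTA_omit) {
  var A3 = "";
  var AO = [];
  for (var nb = 0; nb < DELTA_omit.data.length; nb += 4) {
    A3 += Twenty_seven(DELTA_omit.data[nb], DELTA_omit.data[nb + 1], DELTA_omit.data[nb + 2]);
  }
  AO.push(A3);
  AO.push(DELTA_omit.width);
  AO.push(DELTA_omit.height);
  return AO;
}

// In a browser the argument would come from the canvas:
//   var imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
// Here a 2x1 ImageData-like object stands in (RGBA, 4 bytes per pixel).
var mock = {
  data: [0, 0, 0, 255,  255, 255, 255, 255], // one black pixel, one white pixel
  width: 2,
  height: 1
};
console.log(c26NULL(mock)); // ["@Z", 2, 1]
```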
Each "layer" is still 24-bit color, and the alpha (transparency) channel always flattens to an opaque value, so it is skipped during conversion.
<script>
function thE26(s26_, of27) {
    if (typeof (s26_[0]) === "string") {
        var ipt = s26_[0];
        var threshes = [];
        for (var pxR = 0; pxR < ipt.length; pxR++) {
            // true = pixel does NOT match the chosen color index (of27)
            threshes.push(!(ipt.charAt(pxR) === String.fromCharCode(of27 + 64)));
        }
    } else {
        console.log("conversion to Ternary diverted by Operating System");
        return [false, false, false, false, false, false, false, false, false, 0, 0];
    }
    threshes.push(s26_[1]); // width
    threshes.push(s26_[2]); // height
    return threshes;
}
</script>
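A sketch of thresholding the encoded string for one of the 27 colors (here index 26, white). Note the inverted boolean: true marks pixels that do NOT match the chosen color. The function is repeated so the sketch runs on its own.

```javascript
// The lesson's threshold function, repeated so this sketch is self-contained.
function thE26(s26_, of27) {
  if (typeof (s26_[0]) === "string") {
    var ipt = s26_[0];
    var threshes = [];
    for (var pxR = 0; pxR < ipt.length; pxR++) {
      // true = pixel does NOT match the chosen color index (of27)
      threshes.push(!(ipt.charAt(pxR) === String.fromCharCode(of27 + 64)));
    }
  } else {
    console.log("conversion to Ternary diverted by Operating System");
    return [false, false, false, false, false, false, false, false, false, 0, 0];
  }
  threshes.push(s26_[1]); // width
  threshes.push(s26_[2]); // height
  return threshes;
}

// ["@Z", 2, 1] is a 2x1 image: one black pixel (index 0) and one white (index 26).
// Thresholding for index 26: the black pixel is "not white" (true),
// the white pixel matches (false); width and height are appended at the end.
console.log(thE26(["@Z", 2, 1], 26)); // [true, false, 2, 1]
```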
Depending on the focused paint field, each character-color is iterated over to locate the addresses of potential objects in a large array (its size depends on the photo's dimensions), because the shapes are edged at each point of binary comparison.
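One way to read those addresses back out is sketched below; maskToPoints is a hypothetical helper (not part of the lesson's code) that converts the flat boolean array returned by thE26 into [x, y] pixel coordinates of the matching color, using the width and height stored in the array's last two entries.

```javascript
// Hypothetical helper: collect [x, y] addresses of pixels that matched
// the chosen color. In thE26's output, false means "pixel matched",
// and the final two entries are the image width and height.
function maskToPoints(threshes) {
  var width = threshes[threshes.length - 2];
  var height = threshes[threshes.length - 1];
  var points = [];
  for (var i = 0; i < width * height; i++) {
    if (threshes[i] === false) {
      points.push([i % width, Math.floor(i / width)]);
    }
  }
  return points;
}

// [true, false, 2, 1] from a 2x1 image: only the pixel at (1, 0) matched.
console.log(maskToPoints([true, false, 2, 1])); // [[1, 0]]
```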
Please also refer to my teachings on edge detection to put these shapes to practical use.
https://oercommons.org/courseware/lesson/107797/overview