iPhone - Gaussian filter with OpenGL shaders
I am trying to learn shaders to implement something in my iPhone app. So far I have understood easy examples like making a color image grayscale, thresholding, etc. Most of those examples involve simple operations in which processing the input image pixel I(x,y) results in a simple modification of the colors of that same pixel.
But what about convolutions? For example, the easiest case would be the Gaussian filter, in which the output image pixel O(x,y) depends not only on I(x,y) but also on the surrounding 8 pixels:

O(x,y) = (I(x,y) + the values of the surrounding 8 pixels) / 9;
Normally, this cannot be done with a single image buffer, or the input pixels would change as the filter is performed. How can I do this with shaders? Also, should I handle the borders myself, or is there a built-in function or something that checks for an invalid pixel access like I(-1,-1)?
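For reference, this is the kind of CPU-side loop I have in mind, as a minimal Python sketch (just to illustrate the double buffering and border clamping I am asking about, not actual app code):

```python
# Naive 3x3 box blur on a grayscale image (list of rows of floats).
# The output goes to a SEPARATE buffer so input pixels are never
# overwritten mid-filter, and out-of-range accesses like I(-1,-1)
# are clamped to the nearest edge pixel.
def box_blur_3x3(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp border rows
                    xx = min(max(x + dx, 0), w - 1)  # clamp border columns
                    acc += img[yy][xx]
            out[y][x] = acc / 9.0
    return out

print(box_blur_3x3([[9.0, 0.0], [0.0, 0.0]]))  # → [[4.0, 2.0], [2.0, 1.0]]
```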
Thanks in advance.

PS: I will be generous (read: give a lot of points) ;)
A highly optimized shader-based approach for performing a nine-hit Gaussian blur was presented by Daniel Rákos. His process uses the underlying interpolation provided by texture filtering in hardware to perform a nine-hit filter using only five texture reads per pass. This is also split into separate horizontal and vertical passes to further reduce the number of texture reads required.
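The constants that appear in the shaders below can be reproduced from this scheme. As a quick Python sketch of one way to arrive at those numbers (my own illustration, not part of the framework):

```python
# Discrete 9-tap Gaussian weights taken from row 12 of Pascal's triangle
# (924, 792, 495, 220, 66, ...), dropping the two outermost coefficients
# on each side and renormalizing the rest.
coeffs = [924, 792, 495, 220, 66]        # center tap, then taps 1..4
total = coeffs[0] + 2 * sum(coeffs[1:])  # 4070
w = [c / total for c in coeffs]          # w[0] ≈ 0.2270270270

# Linear sampling: one bilinear read placed between texels 1 and 2
# (and between 3 and 4) stands in for two discrete reads, so the
# nine-tap filter needs only five texture reads per pass.
w1 = w[1] + w[2]                         # ≈ 0.3162162162
o1 = (1 * w[1] + 2 * w[2]) / w1          # ≈ 1.3846153846
w2 = w[3] + w[4]                         # ≈ 0.0702702703
o2 = (3 * w[3] + 4 * w[4]) / w2          # ≈ 3.2307692308

print(w[0], w1, o1, w2, o2)
```

These are exactly the offsets and weights hardcoded in the vertex and fragment shaders below.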
I rolled my own implementation of this, tuned for OpenGL ES and the iOS GPUs, into my image processing framework (under the GPUImageFastBlurFilter class). In my tests, it can perform a single blur pass of a 640x480 frame in 2.0 ms on an iPhone 4, which is pretty fast.
i used following vertex shader:
    attribute vec4 position;
    attribute vec2 inputTextureCoordinate;

    uniform mediump float texelWidthOffset;
    uniform mediump float texelHeightOffset;

    varying mediump vec2 centerTextureCoordinate;
    varying mediump vec2 oneStepLeftTextureCoordinate;
    varying mediump vec2 twoStepsLeftTextureCoordinate;
    varying mediump vec2 oneStepRightTextureCoordinate;
    varying mediump vec2 twoStepsRightTextureCoordinate;

    void main()
    {
        gl_Position = position;

        vec2 firstOffset = vec2(1.3846153846 * texelWidthOffset, 1.3846153846 * texelHeightOffset);
        vec2 secondOffset = vec2(3.2307692308 * texelWidthOffset, 3.2307692308 * texelHeightOffset);

        centerTextureCoordinate = inputTextureCoordinate;
        oneStepLeftTextureCoordinate = inputTextureCoordinate - firstOffset;
        twoStepsLeftTextureCoordinate = inputTextureCoordinate - secondOffset;
        oneStepRightTextureCoordinate = inputTextureCoordinate + firstOffset;
        twoStepsRightTextureCoordinate = inputTextureCoordinate + secondOffset;
    }
and the following fragment shader:
    precision highp float;

    uniform sampler2D inputImageTexture;

    varying mediump vec2 centerTextureCoordinate;
    varying mediump vec2 oneStepLeftTextureCoordinate;
    varying mediump vec2 twoStepsLeftTextureCoordinate;
    varying mediump vec2 oneStepRightTextureCoordinate;
    varying mediump vec2 twoStepsRightTextureCoordinate;

    // const float weight[3] = float[]( 0.2270270270, 0.3162162162, 0.0702702703 );

    void main()
    {
        lowp vec3 fragmentColor = texture2D(inputImageTexture, centerTextureCoordinate).rgb * 0.2270270270;
        fragmentColor += texture2D(inputImageTexture, oneStepLeftTextureCoordinate).rgb * 0.3162162162;
        fragmentColor += texture2D(inputImageTexture, oneStepRightTextureCoordinate).rgb * 0.3162162162;
        fragmentColor += texture2D(inputImageTexture, twoStepsLeftTextureCoordinate).rgb * 0.0702702703;
        fragmentColor += texture2D(inputImageTexture, twoStepsRightTextureCoordinate).rgb * 0.0702702703;

        gl_FragColor = vec4(fragmentColor, 1.0);
    }
to perform this. The two passes can be achieved by sending a 0 value for texelWidthOffset (for the vertical pass), and then feeding that result into a run where you give a 0 value for texelHeightOffset (for the horizontal pass).
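To see why the two 1-D passes are equivalent to a full 2-D Gaussian blur: the 2-D Gaussian kernel is separable, i.e. it is the outer product of the 1-D kernel with itself, so filtering the rows and then the columns gives the same result as one big 2-D convolution. A quick Python check (my own sketch, using the weights above expanded to all nine taps):

```python
# The 1-D weights used above: center tap plus four taps on each side.
w = [0.0162162162, 0.0540540541, 0.1216216216, 0.1945945946,
     0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162]

# The separable 2-D kernel is the outer product of the 1-D kernel.
k2d = [[a * b for b in w] for a in w]

# Both the 1-D weights and the resulting 9x9 kernel sum to ~1,
# so neither pass brightens or darkens the image.
print(sum(w), sum(sum(row) for row in k2d))
```

This is also why the two-pass version is so much cheaper: 2 × 5 = 10 texture reads per pixel instead of the 81 a direct 9x9 convolution would need.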
I have some more advanced examples of convolutions in the above-linked framework, including Sobel edge detection.