I am trying to do alpha blending in C#. My current code is:
final.Red = (pencil.Red * pencil.Alpha) + (background.Red * (1.0f - pencil.Alpha));
final.Green = (pencil.Green * pencil.Alpha) + (background.Green * (1.0f - pencil.Alpha));
final.Blue = (pencil.Blue * pencil.Alpha) + (background.Blue * (1.0f - pencil.Alpha));
This works fine as long as the background pixel is fully opaque. But what is the calculation for the colors if the background pixel has an alpha value of its own?
OK, I managed it by myself. It was not as complicated as I thought. Here is my solution:
final.Red = (pencil.Red * pencil.Alpha) + (background.Red * (1.0f - pencil.Alpha));
final.Green = (pencil.Green * pencil.Alpha) + (background.Green * (1.0f - pencil.Alpha));
final.Blue = (pencil.Blue * pencil.Alpha) + (background.Blue * (1.0f - pencil.Alpha));
final.Alpha = pencil.Alpha + (background.Alpha * (1.0f - pencil.Alpha));
With this it works for any background opacity.
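For reference, the complete straight-alpha "over" operator can be sketched as below. RgbaF is a hypothetical struct standing in for the post's color type; note that with straight (non-premultiplied) colors the blended channels are usually divided by the resulting alpha, a step the snippet above omits.

```csharp
using System;

// Hypothetical color struct with all components in [0, 1].
struct RgbaF
{
    public float Red, Green, Blue, Alpha;

    // Composite "pencil" over "background" with the straight-alpha "over" operator.
    public static RgbaF Over(RgbaF pencil, RgbaF background)
    {
        RgbaF final;
        // Resulting coverage: source alpha plus whatever the background
        // contributes through the source's remaining transparency.
        final.Alpha = pencil.Alpha + background.Alpha * (1.0f - pencil.Alpha);
        float bgWeight = background.Alpha * (1.0f - pencil.Alpha);
        // For straight colors, normalize by the resulting alpha.
        float inv = final.Alpha > 0.0f ? 1.0f / final.Alpha : 0.0f;
        final.Red   = (pencil.Red   * pencil.Alpha + background.Red   * bgWeight) * inv;
        final.Green = (pencil.Green * pencil.Alpha + background.Green * bgWeight) * inv;
        final.Blue  = (pencil.Blue  * pencil.Alpha + background.Blue  * bgWeight) * inv;
        return final;
    }
}
```

When the background is fully opaque this reduces to the three-line blend above, because the resulting alpha (and hence the divisor) is 1.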
I need to draw an arrow representing the rotation of a circle in WPF. The best I have so far is the code below, but the arrowhead does not seem to be correctly aligned with the angle of the arrow's arc.
private void InternalDrawArrowGeometry(StreamGeometryContext context)
{
    var angleWidth = 45;
    var startAngle = Rotation + 90;
    var endAngle = Rotation + angleWidth + 90;
    var xEnd = Radius * Math.Cos(startAngle * Math.PI / 180.0);
    var yEnd = Radius * Math.Sin(startAngle * Math.PI / 180.0);
    var xStart = Radius * Math.Cos(endAngle * Math.PI / 180.0);
    var yStart = Radius * Math.Sin(endAngle * Math.PI / 180.0);
    var b = angleWidth * Math.PI / 180;
    var pt1 = new Point(CentreX + xStart, CentreY - yStart);
    var pt2 = new Point(CentreX + xEnd, CentreY - yEnd);
    var len2 = 1;
    // Math.Cos/Math.Sin expect radians; passing the literal 45 (degrees)
    // here was one source of the misaligned arrowhead.
    const double angle = 45 * Math.PI / 180.0;
    var pt3 = new Point(
        pt2.X + (len2 / b) * ((pt1.X - pt2.X) * Math.Cos(angle) + (pt1.Y - pt2.Y) * Math.Sin(angle)),
        pt2.Y + (len2 / b) * ((pt1.Y - pt2.Y) * Math.Cos(angle) - (pt1.X - pt2.X) * Math.Sin(angle)));
    var pt4 = new Point(
        pt2.X + (len2 / b) * ((pt1.X - pt2.X) * Math.Cos(angle) - (pt1.Y - pt2.Y) * Math.Sin(angle)),
        pt2.Y + (len2 / b) * ((pt1.Y - pt2.Y) * Math.Cos(angle) + (pt1.X - pt2.X) * Math.Sin(angle)));

    context.BeginFigure(pt1,
        false,  // isFilled
        false); // isClosed
    context.ArcTo(pt2,
        new Size(Radius, Radius),
        0.0,                         // rotationAngle
        startAngle - endAngle > 180, // isLargeArc (greater than 180 deg?)
        SweepDirection.Clockwise,
        true,   // isStroked
        false); // isSmoothJoin
    context.LineTo(pt3, true, false);
    context.LineTo(pt2, true, false);
    context.LineTo(pt4, true, false);
}
Has anyone coded something like this correctly and can give me the code, or can you tell me what is wrong with mine?
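For comparison, here is a minimal sketch (the helper names are hypothetical, not from the code above) of placing the two arrowhead barbs by rotating the direction from pt1 toward pt2 by plus/minus the head angle. It makes the degrees-to-radians conversion explicit, since Math.Cos and Math.Sin take radians while the snippet above passes the literal 45.

```csharp
using System;

static class ArrowHead
{
    // from: a point behind the tip giving the arrow direction; tip: the arc
    // end point; headLen: barb length; headAngleDeg: opening angle of each barb.
    public static ((double X, double Y) A, (double X, double Y) B) Barbs(
        (double X, double Y) from, (double X, double Y) tip,
        double headLen, double headAngleDeg)
    {
        double dx = tip.X - from.X, dy = tip.Y - from.Y;
        double len = Math.Sqrt(dx * dx + dy * dy);
        dx /= len; dy /= len;                       // unit direction into the tip
        double a = headAngleDeg * Math.PI / 180.0;  // degrees -> radians
        double c = Math.Cos(a), s = Math.Sin(a);
        // Step back from the tip along the direction rotated by +a and -a.
        var barbA = (X: tip.X - headLen * (dx * c - dy * s),
                     Y: tip.Y - headLen * (dy * c + dx * s));
        var barbB = (X: tip.X - headLen * (dx * c + dy * s),
                     Y: tip.Y - headLen * (dy * c - dx * s));
        return (barbA, barbB);
    }
}
```

For a true tangent alignment, `from` should be a point slightly before pt2 along the arc rather than pt1, since the chord direction only approximates the tangent for small arc angles.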
This question already has answers here: Is there any way to draw an image to use 4 points rather than 3 (perspective warp) (4 answers) and Perspective Image Transformation with tiling (2 answers). Closed 4 years ago.
I want to rotate a bitmap image constantly, but the four-element array of points that I created does not fit this method: DrawImage(image, points, srcRectangle, GraphicsUnit).
I was reading the Microsoft documentation for the DrawImage method, and it says it works with 3 points (a parallelogram). So I tried with 3 points, and it works as the documentation says. But I need all four points. Perhaps I'm wrong; can someone just tell me how this method works?
public override void dibujar(Graphics area, Bitmap imagen)
{
    radio = Math.Sqrt(Math.Pow(ancho, 2) + Math.Pow(largo, 2)) / 2;
    Rectangle porcion = new Rectangle(indicex * ancho, indicey * largo, ancho, largo);
    Point p1 = new Point(x + (int)(radio * Math.Sin(Math.PI * angulo / 180)), y + (int)(radio * Math.Cos(Math.PI * angulo / 180)));
    Point p2 = new Point(x + ancho + (int)(radio * Math.Sin(Math.PI * (angulo - 90) / 180)), y + (int)(radio * Math.Cos(Math.PI * (angulo - 90) / 180)));
    Point p3 = new Point(x + (int)(radio * Math.Sin(Math.PI * (angulo - 180) / 180)), y - largo + (int)(radio * Math.Cos(Math.PI * (angulo - 180) / 180)));
    Point p4 = new Point(x + ancho + (int)(radio * Math.Sin(Math.PI * (angulo - 270) / 180)), y - largo + (int)(radio * Math.Cos(Math.PI * (angulo - 270) / 180)));
    Point[] points = { p2, p3, p4 }; // Only three destination points; DrawImage infers the fourth corner
    area.DrawImage(imagen, points, porcion, GraphicsUnit.Pixel);
    if (angulo == 0)
    {
        angulo = 360;
    }
    else
    {
        angulo--;
    }
    x += dx;
}
This method throws a NotImplementedException when my array has four elements.
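That matches the documented behavior: the three-element overload maps the source rectangle onto a parallelogram whose fourth corner is implied, and GDI+ throws NotImplementedException for a four-point (perspective) array. A hedged sketch (the helper name is hypothetical) of producing the three rotated destination points:

```csharp
using System;
using System.Drawing;

static class ParallelogramHelper
{
    // Returns the three destination points DrawImage expects:
    // upper-left, upper-right and lower-left corners of the rotated image.
    public static Point[] RotatedCorners(Point center, int width, int height, double angleDeg)
    {
        double a = angleDeg * Math.PI / 180.0;
        double c = Math.Cos(a), s = Math.Sin(a);
        Point Rot(double x, double y) => new Point(
            center.X + (int)Math.Round(x * c - y * s),
            center.Y + (int)Math.Round(x * s + y * c));
        double hw = width / 2.0, hh = height / 2.0;
        return new[]
        {
            Rot(-hw, -hh), // where the source's upper-left corner lands
            Rot( hw, -hh), // upper-right
            Rot(-hw,  hh), // lower-left (the fourth corner is implied)
        };
    }
}
```

Usage would then look like `area.DrawImage(imagen, ParallelogramHelper.RotatedCorners(centro, ancho, largo, angulo), porcion, GraphicsUnit.Pixel);`, where `centro` is a hypothetical center point of the rotated sprite.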
I have been trying to draw equilateral triangles on the sides of a larger triangle. The first triangle is drawn by a separate method setting points A, B and C. So far I have just started with two sides; I am able to find the first two points of each smaller triangle, but I cannot determine the correct formula for the third. I have tried refreshing my memory of trigonometry, but I am at an impasse.
float a =0;
Point p = new Point(pnlDisplay.Width / 2 - (pnlDisplay.Width / 2) /3, 200);
Triangle t = new Triangle(p, pnlDisplay.Width / 3, 0);
drawEqTriangle(e, t);
Point p1 = new Point();
Point p2 = new Point();
Point p3 = new Point();
p1.X = Convert.ToInt32(A.X + t.size / 3);
p1.Y = Convert.ToInt32(A.Y);
p2.X = Convert.ToInt32(A.X + (t.size - t.size / 3));
p2.Y = Convert.ToInt32(A.Y);
//////////////////////////////
p3.X = Convert.ToInt32((A.X - t.size / 3) * Math.Sin(a));
p3.Y = Convert.ToInt32((A.Y - t.size / 3) * Math.Cos(a));
drawTriangle(e, p1, p2, p3);
p1.X = Convert.ToInt32((B.X - t.size / 3 * Math.Cos(t.angle + Math.PI / 3)));
p1.Y = Convert.ToInt32((B.Y + t.size / 3 * Math.Sin(t.angle+ Math.PI / 3)));
p2.X = Convert.ToInt32((B.X - (t.size - t.size / 3) * Math.Cos(t.angle + Math.PI / 3)));
p2.Y = Convert.ToInt32((B.Y + (t.size - t.size / 3) * Math.Sin(t.angle + Math.PI / 3)));
//////////////////////////////
p3.X = Convert.ToInt32((B.X - t.size / 3) * Math.Cos(a));
p3.Y = Convert.ToInt32((B.Y - t.size / 3) * Math.Tan(a));
drawTriangle(e, p1, p2, p3);
This may be a question for the math subsection, but I thought I would try here first. What I need is the formula for p3.X and p3.Y
Any help would be greatly appreciated.
EDIT: changing "a" to float a = Convert.ToSingle( 60 * Math.PI / 180);
results in this:
FINAL EDIT:
Using MBo's answer:
Let's build universal formulas for any triangle orientation (note that it is worth using an A[] array for the big triangle instead of explicit A, B, C vertices):
p1.X = A.X * 2 / 3 + B.X / 3;
p1.Y = A.Y * 2 / 3 + B.Y / 3;
p2.X = A.X / 3 + B.X * 2 / 3;
p2.Y = A.Y / 3 + B.Y * 2 / 3;
D.X = (A.X - p1.X);
D.Y = (A.Y - p1.Y);
//note - angle sign depends on ABC orientation CW/CCW
p3.X = p1.X + D.X * Math.Cos(2 * Math.PI / 3) - D.Y * Math.Sin(2 * Math.PI / 3);
p3.Y = p1.Y + D.X * Math.Sin(2 * Math.PI / 3) + D.Y * Math.Cos(2 * Math.PI / 3);
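Those formulas translate almost directly into C#. Here is a sketch using a hypothetical EdgeTriangle helper over System.Drawing.PointF (floats avoid the integer division that int-based Point coordinates would introduce):

```csharp
using System;
using System.Drawing;

static class Koch
{
    // Given edge A->B of the big triangle, returns the three vertices of the
    // equilateral triangle erected on the middle third of that edge.
    public static PointF[] EdgeTriangle(PointF A, PointF B)
    {
        var p1 = new PointF(A.X * 2f / 3f + B.X / 3f, A.Y * 2f / 3f + B.Y / 3f);
        var p2 = new PointF(A.X / 3f + B.X * 2f / 3f, A.Y / 3f + B.Y * 2f / 3f);
        float dx = A.X - p1.X, dy = A.Y - p1.Y;
        // Rotate D = A - p1 by 120 degrees; flip the sign of s for the
        // opposite winding (CW vs. CCW) of A, B, C.
        float c = (float)Math.Cos(2.0 * Math.PI / 3.0);
        float s = (float)Math.Sin(2.0 * Math.PI / 3.0);
        var p3 = new PointF(p1.X + dx * c - dy * s, p1.Y + dx * s + dy * c);
        return new[] { p1, p2, p3 };
    }
}
```

Calling this once per edge of the big triangle (A to B, B to C, C to A) covers all three sides without per-side trigonometry.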
I am trying to write an algorithm to convert my mouse click to 3D coordinates (to insert an object at this point).
I have "ground" level where Y = 0 and I want to calculate X and Z based on my mouse click. My function currently looks like that:
Point p = this.control.PointToClient(new Point(System.Windows.Forms.Cursor.Position.X, System.Windows.Forms.Cursor.Position.Y));
Vector3 pos = GeometryHelper.Unproject(new Vector3(p.X, 0f, p.Y), viewport.X, viewport.Y, viewport.Width, viewport.Height, projectionPlane.Near, projectionPlane.Far, Matrix4.Invert(mProjectionMatrix * camera.GetViewMatrix()));
active.applyGeometry(pos);
The applyGeometry function simply sets the position of an object. I believe the passed arguments are self-explanatory.
My Unproject function looks like this:
public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix4 inverseWorldViewProjection)
{
    // Map the screen-space input to normalized device coordinates first.
    Vector4 ndc;
    ndc.X = (vector.X - x) / width * 2.0f - 1.0f;
    ndc.Y = (vector.Y - y) / height * 2.0f - 1.0f;
    ndc.Z = vector.Z / (maxZ - minZ) * 2.0f - 1.0f;
    ndc.W = 1.0f;

    // Multiply into a separate vector. The original version overwrote
    // result.X before it was used to compute result.Y, .Z and .W, which
    // corrupted the transform.
    Vector4 result;
    result.X = ndc.X * inverseWorldViewProjection.M11 + ndc.Y * inverseWorldViewProjection.M21 + ndc.Z * inverseWorldViewProjection.M31 + inverseWorldViewProjection.M41;
    result.Y = ndc.X * inverseWorldViewProjection.M12 + ndc.Y * inverseWorldViewProjection.M22 + ndc.Z * inverseWorldViewProjection.M32 + inverseWorldViewProjection.M42;
    result.Z = ndc.X * inverseWorldViewProjection.M13 + ndc.Y * inverseWorldViewProjection.M23 + ndc.Z * inverseWorldViewProjection.M33 + inverseWorldViewProjection.M43;
    result.W = ndc.X * inverseWorldViewProjection.M14 + ndc.Y * inverseWorldViewProjection.M24 + ndc.Z * inverseWorldViewProjection.M34 + inverseWorldViewProjection.M44;

    result /= result.W;
    return new Vector3(result.X, result.Y, result.Z);
}
The problem is that the Unproject function returns a result close to (0, 0, 0), which makes my object appear around the origin. Any idea how to modify it to work properly?
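One approach that sidesteps the depth question entirely: unproject the mouse position twice (at depth 0 and depth 1) to get a world-space ray, then intersect that ray with the ground plane Y = 0. A minimal sketch, with Vec3 as a stand-in for the Vector3 type used above:

```csharp
using System;

struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
}

static class GroundPicker
{
    // nearPoint/farPoint: the mouse position unprojected at depth 0 and 1.
    public static Vec3 IntersectGround(Vec3 nearPoint, Vec3 farPoint)
    {
        // Ray: P(t) = near + t * (far - near); solve P(t).Y = 0 for t.
        float t = nearPoint.Y / (nearPoint.Y - farPoint.Y);
        return new Vec3(
            nearPoint.X + t * (farPoint.X - nearPoint.X),
            0f,
            nearPoint.Z + t * (farPoint.Z - nearPoint.Z));
    }
}
```

This assumes the ray actually crosses the plane (nearPoint.Y != farPoint.Y); the intersection point then supplies the X and Z you need.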
Update
I believe I have given enough details on this case, but if you need anything else to help me out, do not hesitate to ask. ;)
What I'm trying to do now is to get my COLLADA importer working on a transparent COLLADA model. Model blending is specified by the COLLADA specification, in chapter 7, paragraph "Rendering/Determining Transparency".
In short, there are two inputs for the blend equation: Transparent and Transparency; the former can be an RGBA color or a texture, and the latter can only be a floating-point value. Additionally, Transparent can specify one of two blending equations (ColladaFxOpaqueType.AlphaOne and ColladaFxOpaqueType.RgbZero).
Here are the two blending equations:
// AlphaOne
//
// result.r = fb.r * (1.0f - transparent.a * transparency) + mat.r * (transparent.a * transparency)
// result.g = fb.g * (1.0f - transparent.a * transparency) + mat.g * (transparent.a * transparency)
// result.b = fb.b * (1.0f - transparent.a * transparency) + mat.b * (transparent.a * transparency)
// result.a = fb.a * (1.0f - transparent.a * transparency) + mat.a * (transparent.a * transparency)
// RgbZero
//
// result.r = fb.r * (transparent.r * transparency) + mat.r * (1.0f - transparent.r * transparency)
// result.g = fb.g * (transparent.g * transparency) + mat.g * (1.0f - transparent.g * transparency)
// result.b = fb.b * (transparent.b * transparency) + mat.b * (1.0f - transparent.b * transparency)
// result.a = fb.a * (luminance(transparent.rgb) * transparency) + mat.a * (1.0f - luminance(transparent.rgb) * transparency)
where
- result: draw framebuffer
- fb: destination blend color
- mat: source blend color
- transparent: COLLADA parameter described above
- transparency: COLLADA parameter described above
- luminance: function to average color following ITU-R Recommendation BT.709-4
What I've implemented so far gets the geometry blended in the case where Transparent represents a color (with both blending equations). Below is the code implementing this feature:
internal void CompileBlendStateParameters(ColladaShaderParameters shaderParameters, ColladaFxCommonContext commonContext)
{
    if (shaderParameters == null)
        throw new ArgumentNullException("shaderParameters");
    if (commonContext == null)
        throw new ArgumentNullException("commonContext");

    // Apply alpha blending, if required
    if ((Transparent != null) || (Transparency != null)) {
        BlendState blendState = null;
        ColorRGBAF blendFactors = new ColorRGBAF(1.0f); // No-effect value
        float transparency = 1.0f;                      // No-effect value

        if (Transparency != null)
            transparency = Transparency.GetValue(commonContext);

        if ((Transparent != null) && (Transparent.IsFixedColor(commonContext) == true)) {
            switch (Transparent.Opaque) {
                case ColladaFxOpaqueType.AlphaOne:
                    // Equation from the COLLADA specification:
                    //
                    // result.r = fb.r * (1.0f - transparent.a * transparency) + mat.r * (transparent.a * transparency)
                    // result.g = fb.g * (1.0f - transparent.a * transparency) + mat.g * (transparent.a * transparency)
                    // result.b = fb.b * (1.0f - transparent.a * transparency) + mat.b * (transparent.a * transparency)
                    // result.a = fb.a * (1.0f - transparent.a * transparency) + mat.a * (transparent.a * transparency)

                    // Determine blend factor constant color
                    blendFactors = new ColorRGBAF(Transparent.GetFixedColor(commonContext).Alpha);
                    // Modulate constant color
                    blendFactors = blendFactors * transparency;
                    // Create blend state
                    blendState = new BlendState(BlendState.BlendEquation.Add, BlendState.BlendFactor.ConstColor, BlendState.BlendFactor.ConstColorComplement, blendFactors);
                    break;
                case ColladaFxOpaqueType.RgbZero:
                    // Equation from the COLLADA specification:
                    //
                    // result.r = fb.r * (transparent.r * transparency) + mat.r * (1.0f - transparent.r * transparency)
                    // result.g = fb.g * (transparent.g * transparency) + mat.g * (1.0f - transparent.g * transparency)
                    // result.b = fb.b * (transparent.b * transparency) + mat.b * (1.0f - transparent.b * transparency)
                    // result.a = fb.a * (luminance(transparent.rgb) * transparency) + mat.a * (1.0f - luminance(transparent.rgb) * transparency)

                    // Determine blend factor constant color
                    blendFactors = new ColorRGBAF(Transparent.GetFixedColor(commonContext));
                    // Define alpha blend factor as luminance (ITU-R BT.709)
                    blendFactors.Alpha = blendFactors.Red * 0.212671f + blendFactors.Green * 0.715160f + blendFactors.Blue * 0.072169f;
                    // Modulate constant color
                    blendFactors = blendFactors * transparency;
                    // Create blend state
                    blendState = new BlendState(BlendState.BlendEquation.Add, BlendState.BlendFactor.ConstColorComplement, BlendState.BlendFactor.ConstColor, blendFactors);
                    break;
            }
        } else if ((Transparent != null) && (Transparent.IsTextureColor(commonContext) == true)) {
            throw new NotSupportedException();
        } else {
            // Modulate constant color
            blendFactors = blendFactors * transparency;
            // Create blend state
            blendState = new BlendState(BlendState.BlendEquation.Add, BlendState.BlendFactor.ConstColor, BlendState.BlendFactor.ConstColorComplement, blendFactors);
        }

        if (blendState != null)
            shaderParameters.RenderState.DefineState(blendState);
    }
}
Roughly, the code above abstracts the OpenGL layer and is equivalent to:
// AlphaOne equation
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_CONSTANT_COLOR, GL_ONE_MINUS_CONSTANT_COLOR);
glBlendColor(blendFactors.Red, blendFactors.Green, blendFactors.Blue, blendFactors.Alpha);
// RgbZero equation
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE_MINUS_CONSTANT_COLOR, GL_CONSTANT_COLOR);
glBlendColor(blendFactors.Red, blendFactors.Green, blendFactors.Blue, blendFactors.Alpha);
// Having calculated blendFactor appropriately!!
What I'd like is to support transparency based on a texture (and finally remove that horrible NotSupportedException). Normally this would be implemented by routing the texture into the output fragment's alpha component and setting up blending as usual (Alpha and OneMinusAlpha blend factors), but sadly that is not possible with the above equations (the alpha component wouldn't be blended correctly, would it?).
P.S. You may note that I've implemented blending with a straightforward solution, but one based on a constant blend color (the blendFactors variable in the code), which relies on the GL_EXT_blend_color extension. How can I remove this dependency by using normal blending functions? I think the solution to that question could also help with texture-based blending...
I'm not quite sure I understand what you're going for, but I'll take a stab at it (feel free to follow up in the comments).
You want to implement the AlphaOne and RgbZero equations with standard OpenGL blending, and instead of using a constant color, you want your blend function to be evaluated per texel of an image. The typical blend function for transparency (SRC_ALPHA, ONE_MINUS_SRC_ALPHA) uses the alpha value of the incoming fragment and is evaluated as:
result = dst * (1-src_alpha) + src * src_alpha
Looking one at a time at the two equations you want to implement (just red and alpha for brevity):
AlphaOne:
result.r = fb.r * (1.0f - transparent.a * transparency) + mat.r * (transparent.a * transparency);
result.a = fb.a * (1.0f - transparent.a * transparency) + mat.a * (transparent.a * transparency);
If we look at this equation, we see that it is very similar to the initial equation posted. All we have to do is substitute transparent.a * transparency for src_alpha.
This means that a pixel shader which samples transparent.a from a texture and takes transparency as a uniform float will implement the AlphaOne function:
uniform sampler2D tex;
uniform float transparency;
varying vec2 uv;

void main() {
    vec4 texel = texture2D(tex, uv);
    vec3 out_rgb = texel.rgb;
    float out_alpha = texel.a * transparency;
    gl_FragColor = vec4(out_rgb, out_alpha);
}
This shader submits transparent.a * transparency as the src_alpha value to be used in the blend equation.
I believe this shows that you can easily implement the AlphaOne algorithm with typical OpenGL blending.
However, the RgbZero function looks tougher to me; I don't believe there is any standard blending function that will achieve it. One oddball idea would be to draw the four color channels one at a time: mask off G, B and A (e.g. with glColorMask) and draw just R, using the output alpha as your R blend factor, then repeat for the other three channels. That looks like a rather odd blending setup to me, and I can't think of what else it would be used for.