How to get the username and user profile picture from the Facebook API in the Unity engine? - C#

I want to implement user login in my Unity game, but I am unable to get the user's profile picture from their Facebook ID. The username is showing, but the profile picture is not; it stays blank. I am also not getting any errors!
The sprite of the image is changing, but it is not displayed on the screen.
Here is the code:
void DealWithFbMenus(bool isLoggedIn)
{
    if (isLoggedIn)
    {
        FB.API("/me?fields=first_name", HttpMethod.GET, DisplayUsername);
        FB.API("/me/picture?type=med", HttpMethod.GET, DisplayProfilePic);
    }
}

void DisplayUsername(IResult result)
{
    if (result.Error == null)
    {
        string name = "" + result.ResultDictionary["first_name"];
        FB_userName.text = name;
        Debug.Log("" + name);
    }
    else
    {
        Debug.Log(result.Error);
    }
}

void DisplayProfilePic(IGraphResult result)
{
    if (result.Error == null)
    {
        Debug.Log("Profile Pic");
        FB_userDp.sprite = Sprite.Create(result.Texture, new Rect(0, 0, 128, 128), new Vector2());
    }
    else
    {
        Debug.Log(result.Error);
    }
}

Sprite.Create takes
rect: Rectangular section of the texture to use for the sprite.
I suspect that your hard-coded 128 x 128 pixels is just a section and not the entire texture, depending on the actual dimensions of the downloaded picture.
It also takes
pivot: Sprite's pivot point relative to its graphic rectangle.
You are using new Vector2(), which means the bottom-left corner. In general, for profile pictures I would rather assume that the pivot should be the center of the texture and use
Vector2.one * 0.5f
or
new Vector2(0.5f, 0.5f)
So, assuming the download itself actually works and, as you say, you don't get any errors, you would probably rather use e.g.
FB_userDp.sprite = Sprite.Create(result.Texture, new Rect(0, 0, result.Texture.width, result.Texture.height), Vector2.one * 0.5f);
or, if your goal is to use a square section no matter the dimensions, you could use
var size = Mathf.Min(result.Texture.width, result.Texture.height);
FB_userDp.sprite = Sprite.Create(result.Texture, new Rect(0, 0, size, size), Vector2.one * 0.5f);
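Putting both fixes together, the callback could look like the sketch below; the null check on result.Texture is an added safeguard, not something from the original code:

void DisplayProfilePic(IGraphResult result)
{
    if (result.Error != null)
    {
        Debug.Log(result.Error);
        return;
    }

    // Guard against a successful response that still carries no texture
    // (added safeguard, not in the original code).
    if (result.Texture == null)
    {
        Debug.Log("No profile picture texture received");
        return;
    }

    // Use the full texture and center the pivot.
    FB_userDp.sprite = Sprite.Create(
        result.Texture,
        new Rect(0, 0, result.Texture.width, result.Texture.height),
        Vector2.one * 0.5f);
}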

Related

How to create a 2D map in Unity using a single image?

I have to create a 2D map in Unity using a single image. I have one .png file with 5 different pieces, out of which I have to create a map, and I am not allowed to crop the image. So how do I create this map using only one image?
I am a bit new to Unity; I tried searching but didn't find exactly what I am looking for. I also tried a tilemap using a palette, but couldn't figure out how to extract only one portion of the image.
You can create various Sprites from the given texture on the fly in code.
You can define which part of a given Texture2D shall be used for the Sprite using Sprite.Create, providing the rect in pixel coordinates of the given image. Remember, however, that in Unity texture coordinates start at the bottom left.
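As a side note, if your section coordinates are measured from the top-left of the image (as in most image editors), they have to be flipped once; a minimal sketch, with a hypothetical helper name:

// Convert a rect measured from the top-left of the image (as in most image
// editors) into Unity's bottom-left-based pixel coordinates.
// The helper name is illustrative, not from the original answer.
static Rect FromTopLeft(Texture2D texture, float x, float yFromTop, float width, float height)
{
    return new Rect(x, texture.height - yFromTop - height, width, height);
}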
Example: use the given pixel-coordinate section of a texture for the attached UI.Image component:
[RequireComponent(typeof(Image))]
public class Example : MonoBehaviour
{
    // your texture e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for this sprite, also via the inspector
    public Rect pixelCoordinates;

    private void Start()
    {
        var newSprite = Sprite.Create(texture, pixelCoordinates, Vector2.one / 2f);
        GetComponent<Image>().sprite = newSprite;
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            pixelCoordinates = new Rect();
            return;
        }

        // reset to valid rect values: keep the rect inside the texture bounds
        pixelCoordinates.x = Mathf.Clamp(pixelCoordinates.x, 0, texture.width);
        pixelCoordinates.y = Mathf.Clamp(pixelCoordinates.y, 0, texture.height);
        pixelCoordinates.width = Mathf.Clamp(pixelCoordinates.width, 0, texture.width - pixelCoordinates.x);
        pixelCoordinates.height = Mathf.Clamp(pixelCoordinates.height, 0, texture.height - pixelCoordinates.y);
    }
}
Or you can make a kind of manager class for generating all needed sprites once, e.g. in a list like
public class Example : MonoBehaviour
{
    // your texture e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for the sprites, also via the inspector
    public List<Rect> pixelCoordinates = new List<Rect>();

    // OUTPUT
    public List<Sprite> resultSprites = new List<Sprite>();

    private void Start()
    {
        foreach (var coordinates in pixelCoordinates)
        {
            var newSprite = Sprite.Create(texture, coordinates, Vector2.one / 2f);
            resultSprites.Add(newSprite);
        }
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            for (var i = 0; i < pixelCoordinates.Count; i++)
            {
                pixelCoordinates[i] = new Rect();
            }
            return;
        }

        for (var i = 0; i < pixelCoordinates.Count; i++)
        {
            // reset to valid rect values: keep each rect inside the texture bounds
            var rect = pixelCoordinates[i];
            rect.x = Mathf.Clamp(rect.x, 0, texture.width);
            rect.y = Mathf.Clamp(rect.y, 0, texture.height);
            rect.width = Mathf.Clamp(rect.width, 0, texture.width - rect.x);
            rect.height = Mathf.Clamp(rect.height, 0, texture.height - rect.y);
            pixelCoordinates[i] = rect;
        }
    }
}
Example:
I have 4 Image instances and configured them so the pixelCoordinates are:
imageBottomLeft: X=0, Y=0, W=100, H=100
imageTopLeft: X=0, Y=100, W=100, H=100
imageBottomRight: X=100, Y=0, W=100, H=100
imageTopRight: X=100, Y=100, W=100, H=100
The texture I used is 386 x 395, so I'm not using all of it here (I just added frames to show which sections the sprites are going to use).
So when hitting Play, the following sprites are created:

Render to texture fails after resize

In a graphics application I am rendering an image to a texture, then I use that texture on a 3D model.
My problem is the following:
When the application starts everything is fine, but if I resize the view where I do the rendering and make it bigger, the texture on the 3D model disappears (it doesn't turn black; I think all values become 1). Making the view smaller doesn't make the texture disappear, but it is shown incorrectly (not resized).
Here are some explanatory images:
Resize smaller
Not resized
Resize bigger; 1 pixel bigger is enough to make the image disappear.
The code that generates the render view is this:
private void CreateRenderToTexture(Panel view)
{
    Texture2DDescription t2d = new Texture2DDescription()
    {
        Height = view.Height,
        Width = view.Width,
        Format = Format.R32G32B32A32_Float,
        BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget, //| BindFlags.UnorderedAccess,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SampleDescription = new SampleDescription(_multisample, 0),
        MipLevels = 1,
        Usage = ResourceUsage.Default,
        ArraySize = 1,
    };
    _svgTexture = new Texture2D(_device, t2d);
    _svgRenderView = new RenderTargetView(_device, _svgTexture);
}

private void RenderSVGToTexture()
{
    _camera.SetDefaultProjection();
    UpdatePerFrameBuffers();
    _dc.OutputMerger.SetTargets(_depthStencil, _svgRenderView); // depth stencil has same dimensions as all other buffers
    _dc.ClearRenderTargetView(_svgRenderView, new Color4(1.0f, 1.0f, 1.0f));
    _dc.ClearDepthStencilView(_depthStencil, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);

    Entity e;
    if (RenderingManager.Scene.Entity2DExists("svgimage"))
    {
        RenderingManager.Scene.GetEntity2D("svgimage", out e);
        e.Draw(_dc);
    }
    _swapChain.Present(0, PresentFlags.None);
}
When rendering the 3D scene I call this function before rendering the model:
private void SetTexture()
{
    Entity e;
    if (!RenderingManager.Scene.GetEntity3D("model3d", out e))
        return;

    e.ShaderType = ResourceManager.ShaderType.MAIN_MODEL;
    if (ResourceManager.SVGTexture == null)
    {
        e.ShaderType = ResourceManager.ShaderType.PNUVNOTEX;
        return;
    }

    SamplerDescription a = new SamplerDescription();
    a.AddressU = TextureAddressMode.Wrap;
    a.AddressV = TextureAddressMode.Wrap;
    a.AddressW = TextureAddressMode.Wrap;
    a.Filter = Filter.MinPointMagMipLinear;

    SamplerState b = SamplerState.FromDescription(ResourceManager.Device, a);
    ShaderResourceView svgTexResourceView = new ShaderResourceView(ResourceManager.Device, Texture2D.FromPointer(ResourceManager.SVGTexture.ComPointer));
    ResourceManager.Device.ImmediateContext.PixelShader.SetShaderResource(svgTexResourceView, 0);
    ResourceManager.Device.ImmediateContext.PixelShader.SetSampler(b, 0);
    b.Dispose();
    svgTexResourceView.Dispose();
}
Pixel shader:
Texture2D svg : register(t0);
Texture2D errorEstimate : register(t1);
SamplerState ss : register(s0);

float4 main(float4 position : SV_POSITION, float4 color : COLOR, float2 uv : UV) : SV_Target
{
    return color * svg.Sample(ss, uv); // * errorEstimate.Sample(ss, uv);
}
I don't understand what I am doing wrong; I hope you can help me see the mistake I am making. Thank you, and sorry for the bad English!
As it (almost) always turns out, I was making a very stupid mistake.
I wasn't calling the correct resize function.
Basically, in the Renderer2D class there is a DoResize function that resizes the 2D-only buffers, while the abstract Renderer class resizes the rest of the buffers. The mistake is that in the parent class I was calling the wrong base resize function!
Parent class:
protected override void DoResize(uint width, uint height)
{
    if (width == 0 || height == 0)
        return;

    // Here I was calling base.Resize, which was deprecated after a change in the application architecture.
    base.DoResize(width, height);

    _camera.Width = width;
    _camera.Height = height;
    _svgTexture.Dispose();
    _svgRenderView.Dispose();
    CreateRenderToTexture(_viewReference);
    ResizePending = false;
}
Base class:
protected virtual void DoResize(uint width, uint height)
{
    Width = width;
    Height = height;
    _viewport = new Viewport(0, 0, Width, Height);
    _renderTarget.Dispose();

    if (_swapChain.ResizeBuffers(2, (int)width, (int)height, Format.Unknown, SwapChainFlags.AllowModeSwitch).IsFailure)
        Console.WriteLine("An error occurred while resizing buffers.");

    using (var resource = Resource.FromSwapChain<Texture2D>(_swapChain, 0))
        _renderTarget = new RenderTargetView(_device, resource);

    _depthStencil.Dispose();
    CreateDepthBuffer();
}
Maybe the code I posted can be of help for someone who is trying to do some render-to-texture work, since I see that there are always people who can't make it work :)

Generating an alpha mask from a texture

I am trying to create a function in one of my helper classes that will take a texture (left) and then generate an alpha mask (right).
Here's what I have so far:
public Texture2D CreateAlphaMask(Texture2D texture)
{
    if (texture == null)
        return null;

    RenderTarget2D target = new RenderTarget2D(Device, texture.Width, texture.Height);
    Device.SetRenderTarget(target);
    Device.Clear(Color.Black);

    using (SpriteBatch batch = new SpriteBatch(Device))
    {
        BlendState blendState = new BlendState()
        {
            AlphaBlendFunction = BlendFunction.Max,
            AlphaSourceBlend = Blend.One,
            AlphaDestinationBlend = Blend.One,
            ColorBlendFunction = BlendFunction.Add,
            ColorSourceBlend = Blend.InverseDestinationColor,
            ColorDestinationBlend = Blend.Zero,
            BlendFactor = Color.White,
            ColorWriteChannels = ColorWriteChannels.All
        };

        batch.Begin(0, blendState);
        batch.Draw(texture, Vector2.Zero, Color.White);
        batch.End();
    }

    Device.SetRenderTarget(null);
    return target;
}
What should be happening is that if alpha=0, the pixel is black, and if alpha=1, the pixel is white (interpolated between these values as needed).
However, I can't seem to make it go "whiter" than the base image on the left. That is, if I set it to blend white, then at most it reaches the grey tones that I have, but never brighter. This isn't something I can create in advance, either, as it must be calculated during the game.
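For what it's worth, the exact alpha-to-grayscale mapping described above can also be produced on the CPU instead of via blend states; a minimal sketch, assuming an XNA/MonoGame GraphicsDevice field named Device as in the post (the method name is illustrative):

// Sketch: read the pixels back, copy each alpha value into the RGB channels,
// and upload the result into a new texture.
public Texture2D CreateAlphaMaskCpu(Texture2D texture)
{
    if (texture == null)
        return null;

    var pixels = new Color[texture.Width * texture.Height];
    texture.GetData(pixels);

    for (int i = 0; i < pixels.Length; i++)
    {
        // alpha = 0 -> black, alpha = 255 -> white, linear in between
        byte a = pixels[i].A;
        pixels[i] = new Color(a, a, a, 255);
    }

    var mask = new Texture2D(Device, texture.Width, texture.Height);
    mask.SetData(pixels);
    return mask;
}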

MonoTouch: Memory leak when drawing PDF on a custom graphics context

I have an app that's basically a fancy PDF reader. I download a PDF from the internet and generate thumbnails for that PDF. However, it seems that when I generate these thumbnails a lot of memory is allocated (checked using Instruments); sometimes parts of it are collected by the GC, but in the end my app gives up. I've had memory usage of up to 38 MB when generating thumbnails for a single PDF (100x100 thumbs, ~60 pages).
I generate one thumbnail at a time, store it, and then repeat the process, so under any circumstance there should only be one thumbnail in memory (while generating them, at least). My code for generating thumbnails looks like this:
public UIImage GetPageThumbnail(int pageNumber, SizeF size)
{
    // If using a retina display, make sure to scale up the thumbnail as well.
    size.Width = size.Width * UIScreen.MainScreen.Scale;
    size.Height = size.Height * UIScreen.MainScreen.Scale;

    UIGraphics.BeginImageContext(size);
    CGContext tempContext = UIGraphics.GetCurrentContext();
    CGPDFPage page = Document.GetPage(pageNumber);
    RectangleF drawArea = new RectangleF(new PointF(0f, 0f), size);

    CGAffineTransform transform = page.GetDrawingTransform(CGPDFBox.Crop, drawArea, 180, true); // fit PDF to context
    transform.xx = -transform.xx; // }
    transform.x0 = 0;             // } flip horizontally
    //Console.WriteLine("XX: " + transform.xx + ", YX:" + transform.yx + ", XY:" + transform.xy + ", YY:" + transform.yy + ", X0:" + transform.x0 + ", Y0:" + transform.y0);

    tempContext.ConcatCTM(transform);
    tempContext.DrawPDFPage(page);

    UIImage returnImage = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();
    return returnImage;
}
I've tried explicitly disposing the context and the PDF page, but that had no effect (actually it seemed worse, but take that with a pinch of salt).
I've seen some posts about memory leaks with MonoTouch and PDFs (basically this post), but those are pretty old. I'm using the newest MonoTouch (5.0.2).
Not sure where the problem in your code is, but here's my code for generating thumbs of PDF pages. It is working flawlessly. Maybe it helps you. I think your issue might be what you are doing with the returned image when you're done.
public static UIImageView GetLowResPagePreview (CGPDFPage oPdfPage, RectangleF oTargetRect)
{
    RectangleF oOriginalPdfPageRect = oPdfPage.GetBoxRect (CGPDFBox.Media);
    RectangleF oPdfPageRect = PdfViewerHelpers.RotateRectangle (oPdfPage.GetBoxRect (CGPDFBox.Media), oPdfPage.RotationAngle);

    // Create a low-res image representation of the PDF page to display before the TiledPDFView
    // renders its content.
    int iWidth = Convert.ToInt32 (oPdfPageRect.Size.Width);
    int iHeight = Convert.ToInt32 (oPdfPageRect.Size.Height);

    CGColorSpace oColorSpace = CGColorSpace.CreateDeviceRGB ();
    CGBitmapContext oContext = new CGBitmapContext (null, iWidth, iHeight, 8, iWidth * 4, oColorSpace, CGImageAlphaInfo.PremultipliedLast);

    // First fill the background with white.
    oContext.SetFillColor (1.0f, 1.0f, 1.0f, 1.0f);
    oContext.FillRect (oOriginalPdfPageRect);

    // Scale the context so that the PDF page is rendered
    // at the correct size for the zoom level.
    oContext.ConcatCTM (oPdfPage.GetDrawingTransform (CGPDFBox.Media, oPdfPageRect, 0, true));
    oContext.DrawPDFPage (oPdfPage);

    CGImage oImage = oContext.ToImage ();
    UIImage oBackgroundImage = UIImage.FromImage (oImage);
    oContext.Dispose ();
    oImage.Dispose ();
    oColorSpace.Dispose ();

    UIImageView oBackgroundImageView = new UIImageView (oBackgroundImage);
    oBackgroundImageView.Frame = new RectangleF (new PointF (0, 0), oPdfPageRect.Size);
    oBackgroundImageView.ContentMode = UIViewContentMode.ScaleToFill;
    oBackgroundImageView.UserInteractionEnabled = false;
    oBackgroundImageView.AutoresizingMask = UIViewAutoresizing.None;

    return oBackgroundImageView;
}
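Since the answer suspects the leak lies in what happens to the returned image, here is a hedged sketch of a calling pattern that releases each thumbnail as soon as it has been persisted. Document and GetPageThumbnail are from the question; GetThumbPath is purely hypothetical, and the NSData.Save overload is my assumption about the MonoTouch API of that era:

// Sketch: generate, persist, and immediately release each thumbnail so that
// only one UIImage is alive at a time.
for (int pageNumber = 1; pageNumber <= Document.Pages; pageNumber++)
{
    using (UIImage thumb = GetPageThumbnail (pageNumber, new SizeF (100f, 100f)))
    using (NSData png = thumb.AsPNG ())
    {
        NSError error;
        png.Save (GetThumbPath (pageNumber), false, out error);
    }
    // Without the using blocks, each UIImage/NSData pair would stay alive
    // until the GC and the autorelease pool get around to it.
}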

OpenGL ES 2.0 / MonoTouch: Texture is colorized red

I'm currently loading a cube map into my application, but it's shown in a red tone.
Edit: The channel problem is also present when using 2D textures; it seems the channels are not in the correct order. Is there any way to change the order of the channels using the iOS methods?
This is the code for texture loading:
public TextureCube (Generic3DView device, UIImage right, UIImage left, UIImage top, UIImage bottom, UIImage front, UIImage back)
    : base(device)
{
    _Device = device;

    GL.GenTextures (1, ref _Handle);
    GL.BindTexture (TextureType, _Handle);

    LoadTexture (All.TextureCubeMapPositiveX, right);
    LoadTexture (All.TextureCubeMapNegativeX, left);
    LoadTexture (All.TextureCubeMapPositiveY, top);
    LoadTexture (All.TextureCubeMapNegativeY, bottom);
    LoadTexture (All.TextureCubeMapPositiveZ, front);
    LoadTexture (All.TextureCubeMapNegativeZ, back);

    GL.TexParameter (All.TextureCubeMap, All.TextureMinFilter, (Int32)All.LinearMipmapLinear);
    GL.TexParameter (All.TextureCubeMap, All.TextureMagFilter, (Int32)All.Linear);
    GL.GenerateMipmap (All.TextureCubeMap);
}

private void LoadTexture (All usage, UIImage image)
{
    GL.TexImage2D (usage, 0, (Int32)All.Rgba, (Int32)image.Size.Width,
        (Int32)image.Size.Height, 0, All.Rgba, All.UnsignedByte, RequestImagePixelData (image));
}
protected CGBitmapContext CreateARGBBitmapContext (CGImage inImage)
{
    var pixelsWide = inImage.Width;
    var pixelsHigh = inImage.Height;
    var bitmapBytesPerRow = pixelsWide * 4;
    var bitmapByteCount = bitmapBytesPerRow * pixelsHigh;

    // Note implicit colorSpace.Dispose()
    using (var colorSpace = CGColorSpace.CreateDeviceRGB ()) {
        // Allocate the bitmap and create the context
        var bitmapData = Marshal.AllocHGlobal (bitmapByteCount);
        if (bitmapData == IntPtr.Zero) {
            throw new Exception ("Memory not allocated.");
        }

        var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8,
            bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
        if (context == null) {
            throw new Exception ("Context not created");
        }
        return context;
    }
}

// Store pixel data as an ARGB bitmap
protected IntPtr RequestImagePixelData (UIImage inImage)
{
    var imageSize = inImage.Size;
    CGBitmapContext ctxt = CreateARGBBitmapContext (inImage.CGImage);
    var rect = new RectangleF (0.0f, 0.0f, imageSize.Width, imageSize.Height);
    ctxt.DrawImage (rect, inImage.CGImage);
    var data = ctxt.Data;
    return data;
}
I think the channels are inverted, but maybe there is a way to invert the bitmap without custom code.
This is the image which is rendered (ignore the fancy model in front of it):
And the expected image:
Edit:
The GL_INVALID_OPERATION issue has been fixed, but it does not solve the issue with the red texture.
The vertex shader:
attribute vec3 position;
uniform mat4 modelViewMatrix;
varying mediump vec3 texture;

void main()
{
    texture = position.xyz;
    gl_Position = modelViewMatrix * vec4(position.xyz, 1.0);
}
The fragment shader:
varying mediump vec3 texture;
uniform samplerCube cubeMap;

void main()
{
    mediump vec3 cube = vec3(textureCube(cubeMap, texture));
    gl_FragColor = vec4(cube.xyz, 1.0);
}
The problem is in your function CreateARGBBitmapContext, on the line
var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8, bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
If you change
CGImageAlphaInfo.PremultipliedFirst
to
CGImageAlphaInfo.PremultipliedLast
that should fix your code.
After some testing I decided to use the code from "XnaTouch" to load textures; this solves the problem with the red texture.
Of course this was not the end of it, because there was no alpha channel when loading PNG images. Because this was not acceptable and consumed too much time, I decided to write a DDS loader (based on code from http://humus.name/).
Did you bind the program (with glUseProgram) before calling glUniform? glUniform does not work without the program bound and would generate that error in this case.
You can also check the possible causes of that GL error in the glUniform man page (at the end).
I see that you are using RGBA for both the internal format and the data format in your TexImage2D step. Judging by how blue your original image is and how red your resulting image is, I suggest swapping one of them to BGRA.
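A minimal sketch of that change, assuming the OpenTK-style bindings from the question expose a BGRA constant (it may be named All.Bgra or All.BgraExt depending on the binding version) and that the GL_APPLE_texture_format_BGRA8888 / GL_EXT_texture_format_BGRA8888 extension is available, as it is on iOS; on OpenGL ES 2.0 the internal format must stay RGBA, and only the data format changes:

private void LoadTexture (All usage, UIImage image)
{
    // The pixel data produced by the CGBitmapContext is effectively BGRA,
    // so declare it as such; the internal format remains RGBA.
    GL.TexImage2D (usage, 0, (Int32)All.Rgba, (Int32)image.Size.Width,
        (Int32)image.Size.Height, 0, All.Bgra, All.UnsignedByte,
        RequestImagePixelData (image));
}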
