I am working on a project using ARCore. I need to capture the real-world view from the ARCore camera; previously I hid the UI and grabbed a screen capture, but that was too slow, so I looked for an alternative and found Frame.CameraImage.Texture in the ARCore API. It works fine in the Unity Editor, but when I build to a phone and test it, the texture is null:
Texture2D snap = (Texture2D)Frame.CameraImage.Texture;
What is the reason? Could it be a CPU-side issue? I also tried a different approach:
using System.Runtime.InteropServices;
using GoogleARCore;
using UnityEngine;
using UnityEngine.UI;

public class TestFrameCamera : MonoBehaviour
{
    private Texture2D _texture;
    private TextureFormat _format = TextureFormat.RGBA32;

    void Start()
    {
        _texture = new Texture2D(Screen.width, Screen.height, _format, false, false);
    }

    void Update()
    {
        using (var image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable) return;

            // copy only the Y (luminance) plane out of the native buffer
            int size = image.Width * image.Height;
            byte[] yBuff = new byte[size];
            Marshal.Copy(image.Y, yBuff, 0, size);

            _texture.LoadRawTextureData(yBuff);
            _texture.Apply();
            GetComponent<RawImage>().texture = _texture;
        }
    }
}
But if I change the texture format, the image does appear:
private TextureFormat _format = TextureFormat.R8;
That works, but I don't want a red-only image; I want an RGB color image. What should I do?
R8 contains only the single red channel, which is why the image comes out red. You can use TextureFormat.RGBA32 and allocate the buffer with four bytes per pixel, like this:
IntPtr _buff = Marshal.AllocHGlobal(width * height * 4);
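To flesh that out: the CPU image from AcquireCameraImageBytes is YUV, one full-resolution luminance (Y) plane plus 2x2-subsampled chroma (U/V) planes, so the bytes must be converted to RGBA before LoadRawTextureData can produce a color image. Below is a minimal sketch of a CPU-side conversion; it assumes the GoogleARCore CameraImageBytes layout (Y/U/V pointers plus YRowStride, UVRowStride, UVPixelStride) and BT.601 coefficients. A managed per-pixel loop like this is slow, so treat it as illustration rather than a production path (a shader-based conversion would be faster):

using System.Runtime.InteropServices;
using GoogleARCore;
using UnityEngine;
using UnityEngine.UI;

public class CameraImageToRgba : MonoBehaviour
{
    private Texture2D _texture;
    private byte[] _yPlane, _uPlane, _vPlane, _rgba;

    void Update()
    {
        using (var image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable) return;

            int w = image.Width, h = image.Height;
            if (_texture == null)
            {
                _texture = new Texture2D(w, h, TextureFormat.RGBA32, false);
                _yPlane = new byte[image.YRowStride * h];
                _uPlane = new byte[image.UVRowStride * (h / 2)];
                _vPlane = new byte[image.UVRowStride * (h / 2)];
                _rgba = new byte[w * h * 4];
                GetComponent<RawImage>().texture = _texture;
            }

            // copy all three planes out of native memory in one call each
            Marshal.Copy(image.Y, _yPlane, 0, _yPlane.Length);
            Marshal.Copy(image.U, _uPlane, 0, _uPlane.Length);
            Marshal.Copy(image.V, _vPlane, 0, _vPlane.Length);

            for (int row = 0; row < h; row++)
            {
                for (int col = 0; col < w; col++)
                {
                    // chroma is subsampled 2x2; UVPixelStride handles interleaved U/V
                    float y = _yPlane[row * image.YRowStride + col];
                    int uvIdx = (row / 2) * image.UVRowStride + (col / 2) * image.UVPixelStride;
                    float u = _uPlane[uvIdx] - 128f;
                    float v = _vPlane[uvIdx] - 128f;

                    // BT.601 YUV -> RGB, clamped to byte range
                    int o = (row * w + col) * 4;
                    _rgba[o] = (byte)Mathf.Clamp(y + 1.402f * v, 0f, 255f);
                    _rgba[o + 1] = (byte)Mathf.Clamp(y - 0.344f * u - 0.714f * v, 0f, 255f);
                    _rgba[o + 2] = (byte)Mathf.Clamp(y + 1.772f * u, 0f, 255f);
                    _rgba[o + 3] = 255;
                }
            }

            _texture.LoadRawTextureData(_rgba);
            _texture.Apply();
        }
    }
}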
If any of you have dealt with the same problem, could you please tell me the solution?
The user can upload an image from the phone gallery into a UI Image, then move, scale, rotate, and mirror it. After these manipulations they can save the Texture2D of this Image to persistentDataPath with the following code. The problem is that no matter what rotation, position, or scale the UI Image has, the saved Texture2D stays at the defaults: position 0, rotation 0, scale 1. (Which actually makes sense, since the texture itself is unchanged; I only modified the RectTransform of the Image.)
public void SaveClick()
{
    CropSprite();
    SaveSprite();
}

private Texture2D output;

public void CropSprite()
{
    Texture2D MaskTexture = MaskImage.sprite.texture;
    // MaskImage is the parent of the Image the user is editing; it masks the editable area
    // (an image was attached to show which part is the mask and which is the editable area)
    Texture2D originalTextureTexture = TextureMarble.sprite.texture;
    Texture2D TextureTexture = Resize(originalTextureTexture, 250, 250);
    // rescale the editable texture to the size of the mask; otherwise large images save incorrectly

    output = new Texture2D(TextureTexture.width, TextureTexture.height);
    for (int i = 0; i < TextureTexture.width; i++)
    {
        for (int j = 0; j < TextureTexture.height; j++)
        {
            if (MaskTexture.GetPixel(i, j).a != 0)
                output.SetPixel(i, j, TextureTexture.GetPixel(i, j));
            else
                output.SetPixel(i, j, new Color(1f, 1f, 1f, 0f));
        }
    }
    // save only the part of the editable texture that overlaps the mask image
    output.Apply();
}
Texture2D Resize(Texture2D texture2D, int targetX, int targetY)
{
    RenderTexture rt = RenderTexture.GetTemporary(targetX, targetY, 24);
    RenderTexture.active = rt;
    Graphics.Blit(texture2D, rt);
    Texture2D result = new Texture2D(targetX, targetY);
    result.ReadPixels(new Rect(0, 0, targetX, targetY), 0, 0);
    result.Apply();
    // restore state and release the temporary RT so it isn't leaked on every save
    RenderTexture.active = null;
    RenderTexture.ReleaseTemporary(rt);
    return result;
}
public void SaveSprite()
{
    byte[] bytesToSave = output.EncodeToPNG();
    File.WriteAllBytes(Application.persistentDataPath + "/yourTexture1.png", bytesToSave);
}
Not strictly necessary, but for those of you who didn't understand what the mask is in my case, see the attached image.
So how can I save the Texture2D with the RectTransform properties of the Image applied?
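The thread leaves this unanswered, but one common approach is to let a dedicated camera render the manipulated UI into a RenderTexture and read that back: the rendered pixels then include whatever position, rotation, scale, and mirroring the RectTransform has. A minimal sketch, where uiCamera is a hypothetical camera set up to render only the Canvas containing the Image (the name and setup are my assumptions, not from the original post):

public Texture2D CaptureUi(Camera uiCamera, int width, int height)
{
    var rt = RenderTexture.GetTemporary(width, height, 24);
    var prevActive = RenderTexture.active;

    uiCamera.targetTexture = rt;
    uiCamera.Render(); // bakes the RectTransform state into actual pixels

    RenderTexture.active = rt;
    var result = new Texture2D(width, height, TextureFormat.RGBA32, false);
    result.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    result.Apply();

    // restore state and release the temporary render texture
    uiCamera.targetTexture = null;
    RenderTexture.active = prevActive;
    RenderTexture.ReleaseTemporary(rt);
    return result;
}

The returned texture could then be encoded with EncodeToPNG exactly as in SaveSprite above.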
I am attempting to stream the current in-game view over a WebRTC connection. My goal is to capture what the user is seeing as an RGB 24BPP byte array. I am currently able to stream an empty Texture2D; I would like to populate that empty texture with the OVRCameraRig's current in-game view.
I am not a strong Unity developer, but I assumed it might look something like this:
private Texture2D tex;
private RenderTexture rt;
private OVRCameraRig oVRCameraRig;

void Start() {
    // I only have 1 camera rig
    oVRCameraRig = GameObject.FindObjectOfType<OVRCameraRig>();
    tex = new Texture2D(640, 480, TextureFormat.RGB24, false);
    rt = new RenderTexture(640, 480, 8, UnityEngine.Experimental.Rendering.GraphicsFormat.R8G8B8_SRGB);
}

public void Update() {
    oVRCameraRig.leftEyeCamera.targetTexture = rt;
    RenderTexture.active = rt;
    tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    tex.Apply();
}
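For what it's worth, here is a sketch of how this capture might be made to work; these are my assumptions rather than a verified OVR recipe. The depth-buffer argument of the RenderTexture constructor must be 0, 16, 24, or 32 (8 is not a valid value), a commonly renderable format such as ARGB32 is a safer choice than R8G8B8_SRGB, and the pixels should be read only after the camera has rendered the frame:

// requires: using System.Collections;
private Texture2D tex;
private RenderTexture rt;
private OVRCameraRig rig;

IEnumerator Start() {
    rig = FindObjectOfType<OVRCameraRig>();
    tex = new Texture2D(640, 480, TextureFormat.RGB24, false);
    rt = new RenderTexture(640, 480, 24, RenderTextureFormat.ARGB32); // valid depth + color format
    // note: redirecting the eye camera to a texture can blank the HMD view;
    // a separate mirror camera following the rig may be preferable
    rig.leftEyeCamera.targetTexture = rt;

    while (true) {
        yield return new WaitForEndOfFrame(); // wait until this frame has been rendered
        RenderTexture.active = rt;
        tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        tex.Apply();
        RenderTexture.active = null;
        // tex.GetRawTextureData() now holds the RGB 24BPP bytes for the WebRTC track
    }
}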
I have to create a 2D map in Unity using a single image. I have one .png file with 5 different pieces, out of which I have to create a map, and I am not allowed to crop the image. So how do I create this map using only one image?
I am a bit new to Unity; I tried searching but didn't find exactly what I am looking for. I also tried a Tilemap using a Palette, but couldn't figure out how to extract only one portion of the image.
You can create various Sprites from the given texture on the fly in code.
You can define which part of a given Texture2D shall be used for the Sprite using Sprite.Create, providing the rect in pixel coordinates of the given image. Remember, however, that in Unity texture coordinates start at the bottom left.
Example: use the given pixel-coordinate region of a texture for the attached UI.Image component:
[RequireComponent(typeof(Image))]
public class Example : MonoBehaviour
{
    // your texture, e.g. from a public field via the Inspector
    public Texture2D texture;

    // define which pixel coordinates to use for this sprite, also via the Inspector
    public Rect pixelCoordinates;

    private void Start()
    {
        var newSprite = Sprite.Create(texture, pixelCoordinates, Vector2.one / 2f);
        GetComponent<Image>().sprite = newSprite;
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            pixelCoordinates = new Rect();
            return;
        }

        // clamp to valid rect values so the rect stays inside the texture
        // (Sprite.Create fails if the rect exceeds the texture bounds)
        pixelCoordinates.x = Mathf.Clamp(pixelCoordinates.x, 0, texture.width);
        pixelCoordinates.y = Mathf.Clamp(pixelCoordinates.y, 0, texture.height);
        pixelCoordinates.width = Mathf.Clamp(pixelCoordinates.width, 0, texture.width - pixelCoordinates.x);
        pixelCoordinates.height = Mathf.Clamp(pixelCoordinates.height, 0, texture.height - pixelCoordinates.y);
    }
}
Or you can make a kind of manager class for generating all needed sprites at once, e.g. in a list:
public class Example : MonoBehaviour
{
    // your texture, e.g. from a public field via the Inspector
    public Texture2D texture;

    // define which pixel coordinates to use for the sprites, also via the Inspector
    public List<Rect> pixelCoordinates = new List<Rect>();

    // OUTPUT
    public List<Sprite> resultSprites = new List<Sprite>();

    private void Start()
    {
        foreach (var coordinates in pixelCoordinates)
        {
            var newSprite = Sprite.Create(texture, coordinates, Vector2.one / 2f);
            resultSprites.Add(newSprite);
        }
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            for (var i = 0; i < pixelCoordinates.Count; i++)
            {
                pixelCoordinates[i] = new Rect();
            }
            return;
        }

        for (var i = 0; i < pixelCoordinates.Count; i++)
        {
            // clamp to valid rect values so each rect stays inside the texture
            var rect = pixelCoordinates[i];
            rect.x = Mathf.Clamp(rect.x, 0, texture.width);
            rect.y = Mathf.Clamp(rect.y, 0, texture.height);
            rect.width = Mathf.Clamp(rect.width, 0, texture.width - rect.x);
            rect.height = Mathf.Clamp(rect.height, 0, texture.height - rect.y);
            pixelCoordinates[i] = rect;
        }
    }
}
Example:
I have 4 Image instances and configured them so the pixelCoordinates are:
imageBottomLeft: X=0, Y=0, W=100, H=100
imageTopLeft: X=0, Y=100, W=100, H=100
imageBottomRight: X=100, Y=0, W=100, H=100
imageTopRight: X=100, Y=100, W=100, H=100
The texture I used is 386 × 395, so I'm not using all of it here (I just added frames to show which regions the Sprites are going to use).
So when hitting Play, the following sprites are created:
Is it possible to generate 2D avatar portrait pictures (.png) of 3D characters/objects in Unity, and would it be advisable?
During my game, I want to dynamically generate and show a list of characters/objects in a scrollbar UI component, and I'm too lazy to make these 2D images manually.
I want to know whether it is possible to generate a list of character/object portraits from a set of 3D prefabs to display, or whether it would be more advisable to create the pictures manually and add them to the project as assets.
Apart from saving the manual work, this would also make it a lot easier to add characters/objects to my project and to maintain them if they are changed.
You can use a script like this to take a picture of the scene. You could instantiate the GameObject somewhere with a specific orientation, background, illumination, and distance to the camera, then take the screenshot and store it with your other assets.
using UnityEngine;
using System.Collections;

public class HiResScreenShots : MonoBehaviour {
    public int resWidth = 2550;
    public int resHeight = 3300;

    private Camera camera;
    private bool takeHiResShot = false;

    void Awake() {
        // the old implicit Component.camera property was removed in Unity 5
        camera = GetComponent<Camera>();
    }

    public static string ScreenShotName(int width, int height) {
        return string.Format("{0}/screenshots/screen_{1}x{2}_{3}.png",
            Application.dataPath,
            width, height,
            System.DateTime.Now.ToString("yyyy-MM-dd_HH-mm-ss"));
    }

    public void TakeHiResShot() {
        takeHiResShot = true;
    }

    void LateUpdate() {
        takeHiResShot |= Input.GetKeyDown("k");
        if (takeHiResShot) {
            RenderTexture rt = new RenderTexture(resWidth, resHeight, 24);
            camera.targetTexture = rt;
            Texture2D screenShot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false);
            camera.Render();
            RenderTexture.active = rt;
            screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
            camera.targetTexture = null;
            RenderTexture.active = null; // JC: added to avoid errors
            Destroy(rt);

            byte[] bytes = screenShot.EncodeToPNG();
            string filename = ScreenShotName(resWidth, resHeight);
            System.IO.File.WriteAllBytes(filename, bytes);
            Debug.Log(string.Format("Took screenshot to: {0}", filename));
            takeHiResShot = false;
        }
    }
}
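Building on that script, portrait generation could be automated by instantiating each prefab in front of a dedicated camera, taking the shot, and destroying the instance. A rough sketch; portraitCamera, characterPrefabs, and screenShotter are illustrative names, not part of the original script:

// requires: using System.Collections; using System.Collections.Generic;
public Camera portraitCamera;               // camera pointed at an empty staging area
public List<GameObject> characterPrefabs;   // prefabs to portray
public HiResScreenShots screenShotter;      // the script above, on portraitCamera

IEnumerator GeneratePortraits()
{
    foreach (var prefab in characterPrefabs)
    {
        // place the character a fixed distance in front of the camera, facing it
        var instance = Instantiate(prefab,
            portraitCamera.transform.position + portraitCamera.transform.forward * 3f,
            Quaternion.LookRotation(-portraitCamera.transform.forward));

        screenShotter.TakeHiResShot();
        yield return null; // give LateUpdate a chance to render and save the PNG
        Destroy(instance);
    }
}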
I'm currently loading a cube map into my application, but it's shown with a red tint.
Edit: the channel problem is also present when using 2D textures; it seems the channels are not in the correct order. Is there any way to change the order of the channels using the iOS methods?
This is the code for texture loading:
public TextureCube (Generic3DView device, UIImage right, UIImage left, UIImage top, UIImage bottom, UIImage front, UIImage back)
    : base(device)
{
    _Device = device;
    GL.GenTextures (1, ref _Handle);
    GL.BindTexture (TextureType, _Handle);

    LoadTexture(All.TextureCubeMapPositiveX, right);
    LoadTexture(All.TextureCubeMapNegativeX, left);
    LoadTexture(All.TextureCubeMapPositiveY, top);
    LoadTexture(All.TextureCubeMapNegativeY, bottom);
    LoadTexture(All.TextureCubeMapPositiveZ, front);
    LoadTexture(All.TextureCubeMapNegativeZ, back);

    GL.TexParameter(All.TextureCubeMap, All.TextureMinFilter, (Int32)All.LinearMipmapLinear);
    GL.TexParameter(All.TextureCubeMap, All.TextureMagFilter, (Int32)All.Linear);
    GL.GenerateMipmap(All.TextureCubeMap);
}

private void LoadTexture(All usage, UIImage image)
{
    GL.TexImage2D(usage, 0, (Int32)All.Rgba, (Int32)image.Size.Width,
        (Int32)image.Size.Height, 0, All.Rgba, All.UnsignedByte, RequestImagePixelData(image));
}
protected CGBitmapContext CreateARGBBitmapContext (CGImage inImage)
{
    var pixelsWide = inImage.Width;
    var pixelsHigh = inImage.Height;
    var bitmapBytesPerRow = pixelsWide * 4;
    var bitmapByteCount = bitmapBytesPerRow * pixelsHigh;

    // note: implicit colorSpace.Dispose()
    using (var colorSpace = CGColorSpace.CreateDeviceRGB()) {
        // allocate the bitmap and create the context
        var bitmapData = Marshal.AllocHGlobal (bitmapByteCount);
        if (bitmapData == IntPtr.Zero) {
            throw new Exception ("Memory not allocated.");
        }

        var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8,
            bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
        if (context == null) {
            throw new Exception ("Context not created");
        }
        return context;
    }
}

// store pixel data as an ARGB bitmap
protected IntPtr RequestImagePixelData (UIImage inImage)
{
    var imageSize = inImage.Size;
    CGBitmapContext ctxt = CreateARGBBitmapContext (inImage.CGImage);
    var rect = new RectangleF (0.0f, 0.0f, imageSize.Width, imageSize.Height);
    ctxt.DrawImage (rect, inImage.CGImage);
    var data = ctxt.Data;
    return data;
}
I think the channels are inverted, but maybe there is a way to invert the bitmap without custom code.
This is the image that is rendered (ignore the fancy model in front of it):
And the expected image:
Edit: the GL_INVALID_OPERATION issue has been fixed, but it did not solve the issue with the red texture.
The vertex-shader:
attribute vec3 position;
uniform mat4 modelViewMatrix;
varying mediump vec3 texture;

void main()
{
    texture = position.xyz;
    gl_Position = modelViewMatrix * vec4(position.xyz, 1.0);
}
The fragment-shader:
varying mediump vec3 texture;
uniform samplerCube cubeMap;

void main()
{
    mediump vec3 cube = vec3(textureCube(cubeMap, texture));
    gl_FragColor = vec4(cube.xyz, 1.0);
}
The problem is in your function CreateARGBBitmapContext, on the line
var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8, bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
If you change
CGImageAlphaInfo.PremultipliedFirst
to
CGImageAlphaInfo.PremultipliedLast
that should fix your code.
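Put together, the corrected call would read:

var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8,
    bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedLast);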
After some testing I decided to use the code from "XnaTouch" to load textures, which solved the problem with the red texture.
Of course that was not the end of it, because PNG images loaded this way had no alpha channel. Since that was not acceptable and consumed too much time, I decided to write a DDS loader (based on code from http://humus.name/).
Did you bind the program (with glUseProgram) before calling glUniform? glUniform applies to the currently bound program and generates that error otherwise.
You can also check the possible causes of that GL error in the glUniform man page (at the end).
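For illustration, the call order would be as follows; shaderProgram and cubeMapUniformLocation are placeholder names for your own handles:

GL.UseProgram (shaderProgram);              // bind the program first
GL.Uniform1 (cubeMapUniformLocation, 0);    // then point its sampler at texture unit 0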
I see that you are using RGBA for both format parameters in your TexImage2D call. Judging by how blue your original image is and how red your resulting image is, I suggest swapping one of them to BGRA.
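If your binding doesn't expose a BGRA enum for TexImage2D, an alternative is to swap the red and blue bytes on the CPU before uploading. A small sketch, assuming the 4-bytes-per-pixel buffer produced above (requires compiling with unsafe enabled):

unsafe static void SwapRedBlue (IntPtr data, int pixelCount)
{
    byte* p = (byte*)data;
    for (int i = 0; i < pixelCount; i++, p += 4) {
        // swap the first and third byte of each pixel
        byte tmp = p[0];
        p[0] = p[2];
        p[2] = tmp;
    }
}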