Is it possible to generate 2D avatar portrait pictures (.png) of 3D characters/objects in Unity, and would it be advisable?
During my game, I want to dynamically generate and show a list of characters/objects in a scrollbar UI component, and I'm too lazy to actually go and make these 2D images manually.
I want to know whether it is possible to generate a list of character/object portraits from a set of 3D prefabs to display, or whether it would be more advisable to create the pictures manually and add them to the project as assets.
Apart from sparing my laziness, this would also make it a lot easier to add characters/objects to my project and to maintain them if they change.
You can use a script like this to take a picture of the scene. You could instantiate the gameobject somewhere with a specific orientation, background, illumination and distance to the camera, then take the screenshot and store it somewhere with your other assets.
using UnityEngine;
using System.Collections;

public class HiResScreenShots : MonoBehaviour {
    public int resWidth = 2550;
    public int resHeight = 3300;

    private bool takeHiResShot = false;

    public static string ScreenShotName(int width, int height) {
        return string.Format("{0}/screenshots/screen_{1}x{2}_{3}.png",
                             Application.dataPath,
                             width, height,
                             System.DateTime.Now.ToString("yyyy-MM-dd_HH-mm-ss"));
    }

    public void TakeHiResShot() {
        takeHiResShot = true;
    }

    void LateUpdate() {
        takeHiResShot |= Input.GetKeyDown("k");
        if (takeHiResShot) {
            // Note: `camera` is the old MonoBehaviour shortcut for the attached Camera component;
            // in Unity 5+ replace it with GetComponent<Camera>() or a serialized Camera field.
            RenderTexture rt = new RenderTexture(resWidth, resHeight, 24);
            camera.targetTexture = rt;
            Texture2D screenShot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false);
            camera.Render();
            RenderTexture.active = rt;
            screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
            camera.targetTexture = null;
            RenderTexture.active = null; // JC: added to avoid errors
            Destroy(rt);
            byte[] bytes = screenShot.EncodeToPNG();
            string filename = ScreenShotName(resWidth, resHeight);
            System.IO.File.WriteAllBytes(filename, bytes);
            Debug.Log(string.Format("Took screenshot to: {0}", filename));
            takeHiResShot = false;
        }
    }
}
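For the portrait use case specifically, a variation of the same idea is to keep a dedicated, disabled camera pointed at an off-screen spawn point, instantiate each prefab in front of it, render into a RenderTexture and read that back into a Sprite for the scroll list (or EncodeToPNG and write it to disk, as above). This is only a rough sketch: the portraitCamera, spawnPoint and prefabs fields are placeholder names you would assign in the Inspector, and framing/lighting is up to you.
using System.Collections.Generic;
using UnityEngine;

public class PortraitGenerator : MonoBehaviour
{
    public Camera portraitCamera;       // disabled camera pointed at the spawn point
    public Transform spawnPoint;        // off-screen "photo booth" location
    public List<GameObject> prefabs;    // the character/object prefabs to photograph
    public int size = 256;              // portrait resolution in pixels

    // Returns one Sprite per prefab, rendered against a transparent background.
    public List<Sprite> GeneratePortraits()
    {
        var sprites = new List<Sprite>();
        var rt = RenderTexture.GetTemporary(size, size, 24, RenderTextureFormat.ARGB32);
        portraitCamera.targetTexture = rt;
        portraitCamera.clearFlags = CameraClearFlags.SolidColor;
        portraitCamera.backgroundColor = Color.clear;

        foreach (var prefab in prefabs)
        {
            var instance = Instantiate(prefab, spawnPoint.position, spawnPoint.rotation);
            portraitCamera.Render();

            // Read the rendered frame back into a Texture2D.
            RenderTexture.active = rt;
            var tex = new Texture2D(size, size, TextureFormat.ARGB32, false);
            tex.ReadPixels(new Rect(0, 0, size, size), 0, 0);
            tex.Apply();
            RenderTexture.active = null;

            sprites.Add(Sprite.Create(tex, new Rect(0, 0, size, size), Vector2.one * 0.5f));

            // Deactivate before Destroy so the instance can't leak into the next portrait
            // (Destroy is deferred to the end of the frame).
            instance.SetActive(false);
            Destroy(instance);
        }

        portraitCamera.targetTexture = null;
        RenderTexture.ReleaseTemporary(rt);
        return sprites;
    }
}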
If any of you have dealt with the same problem, could you please tell me the solution?
The user can upload his own image from the phone gallery to a UI Image, then move it, scale it, rotate it and mirror it. After such manipulations he can save the Texture2D of this Image to the persistentDataPath with the following code. The problem is that no matter which rotation, position or scale the UI Image has, the Texture2D still comes out at the default (zero position, zero rotation, scale 1), which makes sense, because the texture itself is unchanged; I only changed the RectTransform of the Image.
public void SaveClick()
{
    CropSprite();
    SaveSprite();
}

private Texture2D output;

public void CropSprite()
{
    Texture2D MaskTexture = MaskImage.sprite.texture;
    // MaskImage is the parent of the Image which is used to actually mask the Image the user is manipulating
    // (I attached an image for a better understanding of what is the mask and what is the editable area)
    Texture2D originalTextureTexture = TextureMarble.sprite.texture;
    Texture2D TextureTexture = Resize(originalTextureTexture, 250, 250);
    // I rescale the editable texture to the size of the mask one, otherwise big images will be saved incorrectly
    output = new Texture2D(TextureTexture.width, TextureTexture.height);
    for (int i = 0; i < TextureTexture.width; i++)
    {
        for (int j = 0; j < TextureTexture.height; j++)
        {
            if (MaskTexture.GetPixel(i, j).a != 0)
                output.SetPixel(i, j, TextureTexture.GetPixel(i, j));
            else
                output.SetPixel(i, j, new Color(1f, 1f, 1f, 0f));
        }
    }
    // save only the part of the editable texture which overlaps the mask image
    output.Apply();
}

Texture2D Resize(Texture2D texture2D, int targetX, int targetY)
{
    RenderTexture rt = new RenderTexture(targetX, targetY, 24);
    RenderTexture.active = rt;
    Graphics.Blit(texture2D, rt);
    Texture2D result = new Texture2D(targetX, targetY);
    result.ReadPixels(new Rect(0, 0, targetX, targetY), 0, 0);
    result.Apply();
    return result;
}

public void SaveSprite()
{
    byte[] bytesToSave = output.EncodeToPNG();
    File.WriteAllBytes(Application.persistentDataPath + "/yourTexture1.png", bytesToSave);
}
This isn't strictly necessary, but for those of you who didn't understand what the mask is in my case, see the attached image.
So how do I save the Texture2D with the RectTransform properties of the Image applied?
I am attempting to stream out the current in-game view over a WebRTC connection. My goal is to capture what the user is seeing as an RGB 24 BPP byte array. I am currently able to stream an empty Texture2D; I would like to populate that empty texture with the OVRCameraRig's current in-game view.
I am not a strong Unity developer, but I assumed it might look something like this:
private Texture2D tex;
private RenderTexture rt;
private OVRCameraRig oVRCameraRig;

void Start() {
    // I only have 1 camera rig
    oVRCameraRig = GameObject.FindObjectOfType<OVRCameraRig>();
    tex = new Texture2D(640, 480, TextureFormat.RGB24, false);
    rt = new RenderTexture(640, 480, 8, UnityEngine.Experimental.Rendering.GraphicsFormat.R8G8B8_SRGB);
}

public void Update() {
    oVRCameraRig.leftEyeCamera.targetTexture = rt;
    RenderTexture.active = rt;
    tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    tex.Apply();
}
I have to create a 2D map in Unity using a single image. I have one .png file with 5 different pieces out of which I have to create a map, and I am not allowed to crop the image. So how do I create this map using only one image?
I am a bit new to Unity; I tried searching but didn't find exactly what I am looking for. I also tried a Tilemap using a Palette but couldn't figure out how to extract only one portion of the image.
You can create various Sprites from the given texture on the fly in code.
You can define which part of a given Texture2D shall be used for the Sprite using Sprite.Create, providing the rect in pixel coordinates of the given image. Remember, however, that in Unity texture coordinates start at the bottom left.
Example: use a given pixel-coordinate region of a texture for the attached UI.Image component:
using UnityEngine;
using UnityEngine.UI;

[RequireComponent(typeof(Image))]
public class Example : MonoBehaviour
{
    // your texture, e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for this sprite, also via the inspector
    public Rect pixelCoordinates;

    private void Start()
    {
        var newSprite = Sprite.Create(texture, pixelCoordinates, Vector2.one / 2f);
        GetComponent<Image>().sprite = newSprite;
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            pixelCoordinates = new Rect();
            return;
        }

        // clamp to valid rect values so the rect always stays inside the texture
        pixelCoordinates.x = Mathf.Clamp(pixelCoordinates.x, 0, texture.width);
        pixelCoordinates.y = Mathf.Clamp(pixelCoordinates.y, 0, texture.height);
        pixelCoordinates.width = Mathf.Clamp(pixelCoordinates.width, 0, texture.width - pixelCoordinates.x);
        pixelCoordinates.height = Mathf.Clamp(pixelCoordinates.height, 0, texture.height - pixelCoordinates.y);
    }
}
Or you can make a kind of manager class for generating all needed sprites once, e.g. in a list like this:
using System.Collections.Generic;
using UnityEngine;

public class Example : MonoBehaviour
{
    // your texture, e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for each sprite, also via the inspector
    public List<Rect> pixelCoordinates = new List<Rect>();

    // OUTPUT
    public List<Sprite> resultSprites = new List<Sprite>();

    private void Start()
    {
        foreach (var coordinates in pixelCoordinates)
        {
            var newSprite = Sprite.Create(texture, coordinates, Vector2.one / 2f);
            resultSprites.Add(newSprite);
        }
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            for (var i = 0; i < pixelCoordinates.Count; i++)
            {
                pixelCoordinates[i] = new Rect();
            }
            return;
        }

        for (var i = 0; i < pixelCoordinates.Count; i++)
        {
            // clamp to valid rect values so each rect always stays inside the texture
            var rect = pixelCoordinates[i];
            rect.x = Mathf.Clamp(pixelCoordinates[i].x, 0, texture.width);
            rect.y = Mathf.Clamp(pixelCoordinates[i].y, 0, texture.height);
            rect.width = Mathf.Clamp(pixelCoordinates[i].width, 0, texture.width - pixelCoordinates[i].x);
            rect.height = Mathf.Clamp(pixelCoordinates[i].height, 0, texture.height - pixelCoordinates[i].y);
            pixelCoordinates[i] = rect;
        }
    }
}
Example:
I have 4 Image instances and configured them so the pixelCoordinates are:
imageBottomLeft: X=0, Y=0, W=100, H=100
imageTopLeft: X=0, Y=100, W=100, H=100
imageBottomRight: X=100, Y=0, W=100, H=100
imageTopRight: X=100, Y=100, W=100, H=100
The texture I used is 386 × 395, so I'm not using all of it here (I just added frames showing the regions the Sprites are going to use).
When hitting Play, the following sprites are created:
I am working on a project using ARCore.
I need the real-world view that is visible through the ARCore camera; previously I was hiding the UI and capturing the screen.
But that was so slow that I looked in the ARCore API and found Frame.CameraImage.Texture.
It worked normally in the Unity Editor environment.
But when I build it to my phone and check, the texture is null.
Texture2D snap = (Texture2D)Frame.CameraImage.Texture;
What is the reason? Maybe a CPU problem?
I also tried a different approach:
using GoogleARCore;
using UnityEngine;
using UnityEngine.UI;

public class TestFrameCamera : MonoBehaviour
{
    private Texture2D _texture;
    private TextureFormat _format = TextureFormat.RGBA32;

    // Use this for initialization
    void Start()
    {
        _texture = new Texture2D(Screen.width, Screen.height, _format, false, false);
    }

    // Update is called once per frame
    void Update()
    {
        using (var image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable) return;

            int size = image.Width * image.Height;
            byte[] yBuff = new byte[size];
            System.Runtime.InteropServices.Marshal.Copy(image.Y, yBuff, 0, size);

            _texture.LoadRawTextureData(yBuff);
            _texture.Apply();
            this.GetComponent<RawImage>().texture = _texture;
        }
    }
}
But if I change the texture format like this, an image does come out:
private TextureFormat _format = TextureFormat.R8;
It works, but I don't want a red-only image; I want an RGB color image.
What should I do?
R8 is just red-channel data.
You can use TextureFormat.RGBA32 and allocate the buffer like this:
IntPtr _buff = Marshal.AllocHGlobal(width * height*4);
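To actually get color, the Y, U and V planes have to be combined into RGBA on the CPU before uploading, because loading raw data into an RGBA32 texture expects four bytes per pixel. The following is only a rough, unoptimized sketch of that conversion (the class name is made up, and it assumes the old GoogleARCore CameraImageBytes API with Y/U/V pointers plus YRowStride, UVRowStride and UVPixelStride); the per-pixel loop is slow, and the result may come out flipped or rotated depending on device orientation.
using System.Runtime.InteropServices;
using GoogleARCore;
using UnityEngine;
using UnityEngine.UI;

public class TestFrameCameraColor : MonoBehaviour
{
    private Texture2D _texture;

    void Update()
    {
        using (var image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable) return;

            int width = image.Width;
            int height = image.Height;

            if (_texture == null || _texture.width != width || _texture.height != height)
                _texture = new Texture2D(width, height, TextureFormat.RGBA32, false);

            // Copy the Y plane and the subsampled U/V planes into managed buffers.
            // The length formulas stop exactly at the last sample so we never read past a plane.
            int yLength = image.YRowStride * (height - 1) + width;
            int uvLength = image.UVRowStride * (height / 2 - 1) + (width / 2 - 1) * image.UVPixelStride + 1;
            byte[] y = new byte[yLength];
            byte[] u = new byte[uvLength];
            byte[] v = new byte[uvLength];
            Marshal.Copy(image.Y, y, 0, yLength);
            Marshal.Copy(image.U, u, 0, uvLength);
            Marshal.Copy(image.V, v, 0, uvLength);

            // Convert YUV_420 to RGBA32, honouring the row and pixel strides.
            var pixels = new Color32[width * height];
            for (int row = 0; row < height; row++)
            {
                for (int col = 0; col < width; col++)
                {
                    float yv = y[row * image.YRowStride + col];
                    int uvIndex = (row / 2) * image.UVRowStride + (col / 2) * image.UVPixelStride;
                    float uv = u[uvIndex] - 128f;
                    float vv = v[uvIndex] - 128f;

                    byte r = (byte)Mathf.Clamp(yv + 1.402f * vv, 0f, 255f);
                    byte g = (byte)Mathf.Clamp(yv - 0.344f * uv - 0.714f * vv, 0f, 255f);
                    byte b = (byte)Mathf.Clamp(yv + 1.772f * uv, 0f, 255f);
                    pixels[row * width + col] = new Color32(r, g, b, 255);
                }
            }

            _texture.SetPixels32(pixels);
            _texture.Apply();
            GetComponent<RawImage>().texture = _texture;
        }
    }
}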
I'm trying to capture a screenshot of a GameObject in Unity3D Pro so that it has a transparent background. This script was suggested to me and it works when attached to the main Camera, as long as the material doesn't have a texture; otherwise I get semi-transparency appearing on the GameObject, as shown in this example: http://sta.sh/0iwguk5rx61. Any help with this would be greatly appreciated.
public int resWidth = 2550;
public int resHeight = 3300;

private bool takeHiResShot = false;

public static string ScreenShotName(int width, int height) {
    return string.Format("{0}/screen_{1}x{2}_{3}.png",
                         Application.dataPath,
                         width, height,
                         System.DateTime.Now.ToString("yyyy-MM-dd_HH-mm-ss"));
}

public void TakeHiResShot() {
    takeHiResShot = true;
}

void LateUpdate() {
    takeHiResShot |= Input.GetKeyDown("k");
    if (takeHiResShot)
    {
        RenderTexture rt = new RenderTexture(resWidth, resHeight, 24);
        camera.targetTexture = rt;
        Texture2D screenShot = new Texture2D(resWidth, resHeight, TextureFormat.ARGB32, false);
        camera.Render();
        RenderTexture.active = rt;
        screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
        camera.targetTexture = null;
        RenderTexture.active = null;
        Destroy(rt);
        byte[] bytes = screenShot.EncodeToPNG();
        string filename = ScreenShotName(resWidth, resHeight);
        System.IO.File.WriteAllBytes(filename, bytes);
        Debug.Log(string.Format("Took screenshot to: {0}", filename));
        Application.OpenURL(filename);
        takeHiResShot = false;
    }
}
See if this works for you.
using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using UnityEngine;

public class Screenshot : MonoBehaviour
{
    private void Start()
    {
        string filename = string.Format("Assets/Screenshots/capture_{0}.png", DateTime.Now.ToString("yyyy-MM-dd_HH-mm-ss-fff"));
        if (!Directory.Exists("Assets/Screenshots"))
        {
            Directory.CreateDirectory("Assets/Screenshots");
        }
        TakeTransparentScreenshot(Camera.main, Screen.width, Screen.height, filename);
    }

    public static void TakeTransparentScreenshot(Camera cam, int width, int height, string savePath)
    {
        // Depending on your render pipeline, this may not work.
        var bak_cam_targetTexture = cam.targetTexture;
        var bak_cam_clearFlags = cam.clearFlags;
        var bak_RenderTexture_active = RenderTexture.active;

        var tex_transparent = new Texture2D(width, height, TextureFormat.ARGB32, false);
        // Must use 24-bit depth buffer to be able to fill background.
        var render_texture = RenderTexture.GetTemporary(width, height, 24, RenderTextureFormat.ARGB32);
        var grab_area = new Rect(0, 0, width, height);

        RenderTexture.active = render_texture;
        cam.targetTexture = render_texture;
        cam.clearFlags = CameraClearFlags.SolidColor;

        // Simple: use a clear background
        cam.backgroundColor = Color.clear;
        cam.Render();
        tex_transparent.ReadPixels(grab_area, 0, 0);
        tex_transparent.Apply();

        // Encode the resulting output texture to a byte array then write to the file
        byte[] pngShot = ImageConversion.EncodeToPNG(tex_transparent);
        File.WriteAllBytes(savePath, pngShot);

        cam.clearFlags = bak_cam_clearFlags;
        cam.targetTexture = bak_cam_targetTexture;
        RenderTexture.active = bak_RenderTexture_active;
        RenderTexture.ReleaseTemporary(render_texture);
        Texture2D.Destroy(tex_transparent);
    }
}
You might have to refresh your Assets folder (Ctrl+R) to make the Screenshots folder appear in the Project window.
I am Colombian and my English is not good; I hope you understand me.
I had the same problem and solved it by just changing the TextureFormat from ARGB32 to RGB24:
...
RenderTexture rt = new RenderTexture(resWidth, resHeight, 24);
camera.targetTexture = rt;
Texture2D screenShot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false);
camera.Render();
RenderTexture.active = rt;
...
I hope this is helpful.
See u, :D