Streaming Large Image with Pan & Zoom & Annotation - C#

I have a large image, about 500 MB in size, and I want to show it like a map with zoom and pan features in ASP.NET. I found OpenLayers for this, but can anyone share a working example using any framework/library to achieve this functionality in ASP.NET?

I found one answer which I'd like to share with you. Here is the code:
private static void Split(string fileName, int width, int height)
{
    using (Bitmap source = new Bitmap(fileName))
    {
        bool perfectWidth = source.Width % width == 0;
        bool perfectHeight = source.Height % height == 0;
        int lastWidth = width;
        if (!perfectWidth)
        {
            lastWidth = source.Width - ((source.Width / width) * width);
        }
        int lastHeight = height;
        if (!perfectHeight)
        {
            lastHeight = source.Height - ((source.Height / height) * height);
        }
        int widthPartsCount = source.Width / width + (perfectWidth ? 0 : 1);
        int heightPartsCount = source.Height / height + (perfectHeight ? 0 : 1);
        for (int i = 0; i < widthPartsCount; i++)
            for (int j = 0; j < heightPartsCount; j++)
            {
                int tileWidth = i == widthPartsCount - 1 ? lastWidth : width;
                int tileHeight = j == heightPartsCount - 1 ? lastHeight : height;
                using (Bitmap tile = new Bitmap(tileWidth, tileHeight))
                {
                    using (Graphics g = Graphics.FromImage(tile))
                    {
                        g.DrawImage(source, new Rectangle(0, 0, tile.Width, tile.Height), new Rectangle(i * width, j * height, tile.Width, tile.Height), GraphicsUnit.Pixel);
                    }
                    tile.Save(string.Format("{0}-{1}.png", i + 1, j + 1), ImageFormat.Png);
                }
            }
    }
}
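For example, the source could be cut into 256x256 tiles with a call like the one below (the file path here is just a placeholder):

// Hypothetical call: cut the source image into 256x256 tiles next to the executable
Split(@"C:\images\largeMap.png", 256, 256);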

I would advise generating some smaller images (mipmapping, http://en.wikipedia.org/wiki/Mipmap) and/or cutting the image into smaller parts (slicing it up into tiles).
Keep in mind that you can never see all the pixels of 500 MB of data at once, so only transfer what is actually visible.
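A minimal sketch of generating such zoom levels, assuming System.Drawing and simply halving the image until it fits in a single tile (the method and file names are illustrative, not part of the answer above):

// Build a simple pyramid of zoom levels by repeatedly halving the image.
// Each saved level can then be passed to Split() to produce its tiles.
private static void BuildZoomLevels(string fileName, int tileSize)
{
    using (Bitmap level0 = new Bitmap(fileName))
    {
        Bitmap current = level0;
        int level = 0;
        while (current.Width > tileSize || current.Height > tileSize)
        {
            Bitmap smaller = new Bitmap(current, current.Width / 2, current.Height / 2); // scaled copy
            level++;
            smaller.Save(string.Format("level-{0}.png", level), ImageFormat.Png);
            if (current != level0)
                current.Dispose();
            current = smaller;
        }
        if (current != level0)
            current.Dispose();
    }
}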


How to add a GIF to another image using C#

I found a library for working with QR codes in C#:
https://github.com/codebude/QRCoder/
There is a sample showing how to add a logo to the generated QR code image.
I want to add my GIF logo instead of a PNG logo. The GIF is animated.
My question is: how can I add a GIF to a Bitmap (or something like that) and save it?
In that library, Graphics.DrawImage is used to place the icon in the center of the generated QR code image.
The code they use is:
public Image GetGraphic(
    int pixelsPerModule,
    Color darkColor,
    Color lightColor,
    Bitmap icon = null,
    int iconSizePercent = 15,
    int iconBorderWidth = 0,
    bool drawQuietZones = true,
    Color? iconBackgroundColor = null)
{
    int num1 = (QrCodeData.ModuleMatrix.Count - (drawQuietZones ? 0 : 8)) * pixelsPerModule;
    int num2 = drawQuietZones ? 0 : 4 * pixelsPerModule;
    Bitmap graphic = new Bitmap(num1, num1, PixelFormat.Format32bppArgb);
    using (Graphics graphics = Graphics.FromImage(graphic))
    {
        using (SolidBrush solidBrush1 = new SolidBrush(lightColor))
        {
            using (SolidBrush solidBrush2 = new SolidBrush(darkColor))
            {
                graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                graphics.CompositingQuality = CompositingQuality.HighQuality;
                graphics.Clear(lightColor);
                bool flag = icon != null && iconSizePercent > 0 && iconSizePercent <= 100;
                for (int index1 = 0; index1 < num1 + num2; index1 += pixelsPerModule)
                {
                    for (int index2 = 0; index2 < num1 + num2; index2 += pixelsPerModule)
                    {
                        SolidBrush solidBrush3 = QrCodeData.ModuleMatrix[(index2 + pixelsPerModule) / pixelsPerModule - 1]
                            [(index1 + pixelsPerModule) / pixelsPerModule - 1] ? solidBrush2 : solidBrush1;
                        graphics.FillRectangle(solidBrush3, new Rectangle(index1 - num2, index2 - num2, pixelsPerModule, pixelsPerModule));
                    }
                }
                if (flag)
                {
                    float width = (iconSizePercent * graphic.Width) / 100f;
                    float height = flag ? width * icon.Height / icon.Width : 0.0f;
                    float x = (float)((graphic.Width - (double)width) / 2.0);
                    float y = (float)((graphic.Height - (double)height) / 2.0);
                    RectangleF rect = new RectangleF(x - iconBorderWidth, y - iconBorderWidth, width + (iconBorderWidth * 2), height + (iconBorderWidth * 2));
                    RectangleF destRect = new RectangleF(x, y, width, height);
                    SolidBrush solidBrush4 = iconBackgroundColor.HasValue ? new SolidBrush(iconBackgroundColor.Value) : solidBrush1;
                    if (iconBorderWidth > 0)
                    {
                        using (GraphicsPath roundedRectanglePath = CreateRoundedRectanglePath(rect, iconBorderWidth * 2))
                            graphics.FillPath(solidBrush4, roundedRectanglePath);
                    }
                    graphics.DrawImage(icon, destRect, new RectangleF(0.0f, 0.0f, icon.Width, icon.Height), GraphicsUnit.Pixel);
                }
                graphics.Save();
            }
        }
    }
    return graphic;
}
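The library itself only composes a single static Bitmap, so an animated GIF has to be handled around it. One possible approach (a sketch, not part of QRCoder) is to select each frame of the GIF with System.Drawing's frame API, pass it to GetGraphic as the icon, and collect the resulting frames; re-encoding them as an animated GIF then needs a separate GIF encoder, which System.Drawing does not provide out of the box:

// Sketch: overlay every frame of an animated GIF onto the QR code.
// Assumes: qrCode is a QRCoder QRCode instance, gifPath is a placeholder path.
// Requires System.Drawing, System.Drawing.Imaging, System.Collections.Generic.
List<Image> framesWithQr = new List<Image>();
using (Bitmap gif = new Bitmap(gifPath))
{
    FrameDimension dimension = new FrameDimension(gif.FrameDimensionsList[0]);
    int frameCount = gif.GetFrameCount(dimension);
    for (int f = 0; f < frameCount; f++)
    {
        gif.SelectActiveFrame(dimension, f);          // activate frame f
        using (Bitmap frame = new Bitmap(gif))        // copy the active frame
        {
            // Draw the QR code with this frame as the centered icon
            framesWithQr.Add(qrCode.GetGraphic(20, Color.Black, Color.White, frame));
        }
    }
}
// framesWithQr now holds one QR image per GIF frame; writing them back out
// as an animated GIF requires a third-party GIF encoder.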

Unsafe Edge Detection still slow

I'm trying to implement image edge detection in a WPF program.
I already have it working, but converting the image is quite slow.
The code does not use the slow GetPixel and SetPixel functions; instead I loop through the image in unsafe code so that I can access the values directly through a pointer.
Before starting the edge detection I also convert the image to greyscale to improve the detection speed.
Still, it takes around 1600 ms to convert an image of 1920x1440 pixels, which I think could be much faster.
This is the original image:
Which is converted to this (snapshot of the application):
This is how I'm converting the image; what can I do to improve the speed further?
Loading the image and creating a greyscale WriteableBitmap:
private void imageData_Loaded(object sender, RoutedEventArgs e)
{
    if (imageData.Source != null)
    {
        BitmapSource BitmapSrc = new FormatConvertedBitmap(imageData.Source as BitmapSource, PixelFormats.Gray8 /* Convert to greyscale image */, null, 0);
        writeableOriginalBitmap = new WriteableBitmap(BitmapSrc);
        writeableBitmap = writeableOriginalBitmap.Clone();
        imageData.Source = writeableBitmap;
        EdgeDetection();
    }
}
Converting the Image:
private const int TOLERANCE = 20;

private void EdgeDetection()
{
    DateTime startTime = DateTime.Now; //Save starting time
    writeableOriginalBitmap.Lock();
    writeableBitmap.Lock();
    unsafe
    {
        byte* pBuffer = (byte*)writeableBitmap.BackBuffer.ToPointer();
        byte* pOriginalBuffer = (byte*)writeableOriginalBitmap.BackBuffer.ToPointer();
        for (int row = 0; row < writeableOriginalBitmap.PixelHeight; row++)
        {
            for (int column = 0; column < writeableOriginalBitmap.PixelWidth; column++)
            {
                byte edgeColor = getEdgeColor(column, row, pOriginalBuffer); //Get pixel color based on edge value
                pBuffer[column + (row * writeableBitmap.BackBufferStride)] = (byte)(255 - edgeColor);
            }
        }
    }
    //Refresh image
    writeableBitmap.AddDirtyRect(new Int32Rect(0, 0, writeableBitmap.PixelWidth, writeableBitmap.PixelHeight));
    writeableBitmap.Unlock();
    writeableOriginalBitmap.Unlock();
    //Calculate converting time
    TimeSpan diff = DateTime.Now - startTime;
    Debug.WriteLine("Loading Time: " + (int)diff.TotalMilliseconds);
}

private unsafe byte getEdgeColor(int xPos, int yPos, byte* pOriginalBuffer)
{
    byte Color;
    byte maxColor = 0;
    byte minColor = 255;
    int difference;
    //Calculate max and min value of surrounding pixels
    for (int y = yPos - 1; y <= yPos + 1; y++)
    {
        for (int x = xPos - 1; x <= xPos + 1; x++)
        {
            if (x >= 0 && x < writeableOriginalBitmap.PixelWidth && y >= 0 && y < writeableOriginalBitmap.PixelHeight)
            {
                Color = pOriginalBuffer[x + (y * writeableOriginalBitmap.BackBufferStride)];
                if (Color > maxColor) //If current pixel has a higher value than the previous max pixel
                    maxColor = Color; //Save current pixel value as max
                if (Color < minColor) //If current pixel has a lower value than the previous min pixel
                    minColor = Color; //Save current pixel value as min
            }
        }
    }
    //Difference of minimum and maximum pixel with tolerance
    difference = maxColor - minColor - TOLERANCE;
    if (difference < 0)
        difference = 0;
    return (byte)difference;
}
Console Output:
Loading Time: 1599
The following code runs your algorithm on a byte array instead of the BackBuffer of a WriteableBitmap. It completes in less than 300 ms with a 1900x1200 image on my PC.
private static BitmapSource EdgeDetection(BitmapSource source)
{
    var stopwatch = Stopwatch.StartNew();
    var bitmap = new FormatConvertedBitmap(source, PixelFormats.Gray8, null, 0);
    var width = bitmap.PixelWidth;
    var height = bitmap.PixelHeight;
    var originalBuffer = new byte[width * height];
    var buffer = new byte[width * height];
    bitmap.CopyPixels(originalBuffer, width, 0);
    for (var y = 0; y < height; y++)
    {
        for (var x = 0; x < width; x++)
        {
            byte edgeColor = GetEdgeColor(originalBuffer, width, height, x, y);
            buffer[width * y + x] = (byte)(255 - edgeColor);
        }
    }
    Debug.WriteLine(stopwatch.ElapsedMilliseconds);
    return BitmapSource.Create(
        width, height, 96, 96, PixelFormats.Gray8, null, buffer, width);
}

private static byte GetEdgeColor(byte[] buffer, int width, int height, int x, int y)
{
    const int tolerance = 20;
    byte minColor = 255;
    byte maxColor = 0;
    var xStart = Math.Max(0, x - 1);
    var xEnd = Math.Min(width - 1, x + 1);
    var yStart = Math.Max(0, y - 1);
    var yEnd = Math.Min(height - 1, y + 1);
    for (var j = yStart; j <= yEnd; j++)
    {
        for (var i = xStart; i <= xEnd; i++)
        {
            var color = buffer[width * j + i];
            minColor = Math.Min(minColor, color);
            maxColor = Math.Max(maxColor, color);
        }
    }
    return (byte)Math.Max(0, maxColor - minColor - tolerance);
}

Split an image of multiple digits into separate images having one digit only

I am trying to split an image of handwritten digits into separate images.
Consider that I have this image:
I came up with some simple logic that could work, but it can (and did) run into a problem:
private static void SplitImages()
{
    //We're going to use this code once.. to split our own images into separate images.. can we do this somehow?
    Bitmap testSplitImage = (Bitmap)Bitmap.FromFile("TestSplitImage.jpg");
    int[][] imagePixels = new int[testSplitImage.Width][];
    for (int i = 0; i < imagePixels.Length; i++)
    {
        imagePixels[i] = new int[testSplitImage.Height];
    }
    for (int i = 0; i < imagePixels.Length; i++)
    {
        for (int j = 0; j < imagePixels[i].Length; j++)
        {
            Color c = testSplitImage.GetPixel(i, j);
            imagePixels[i][j] = (c.R + c.G + c.B) / 3;
        }
    }
    //let's start by getting the first height vector... and count how many of them is white..dunno..
    int startColNumber = 0;
    int endColNumber = 0;
    bool isStart = false;
    int imageNumber = 1;
    for (int i = 0; i < imagePixels.Length; i++)
    {
        int whiteNumbers = 0;
        for (int j = 0; j < imagePixels[i].Length; j++)
        {
            if (imagePixels[i][j] > 200)
            {
                //consider it white or not really relevant
                whiteNumbers++;
            }
        }
        if (whiteNumbers > testSplitImage.Height * 95.0 / 100.0)
        {
            //let's consider that if a height vector has more than 95% white pixels.. it means that we can start checking for an image
            //now if we started checking for the image.. we need to stop
            if (isStart)
            {
                //consider the end of image.. so the end column should be here or we make it +1 at least
                endColNumber = i + 1;
                isStart = false;
            }
        }
        else
        {
            if (!isStart)
            {
                isStart = true; //we will start checking for the image one row before that maybe?
                startColNumber = i == 0 ? i : i - 1;
            }
        }
        if (endColNumber > 0)
        {
            //we got a start and an end.. let's create a new image out of those pixels..hopefully this will work
            Bitmap splittedImage = new Bitmap(endColNumber - startColNumber + 1, testSplitImage.Height);
            int col = 0;
            for (int k = startColNumber; k <= endColNumber; k++)
            {
                for (int l = 0; l < testSplitImage.Height; l++)
                {
                    int c = imagePixels[k][l];
                    splittedImage.SetPixel(col, l, Color.FromArgb(c, c, c));
                }
                col++;
            }
            splittedImage.Save($"Image{imageNumber++}.jpg");
            endColNumber = 0;
        }
        whiteNumbers = 0;
    }
}
I did get good results:
I also got the three zeros:
However, I also got this as a single image:
This is one sample of roughly 4,000 images that need to be split, and it's one of the best and easiest ones. I am wondering if there's a way to improve my logic, or whether I should drop this approach and find another?
This code only works with monochrome (2 color, black and white) images.
public static class Processor
{
    public static byte[] ToArray(this Bitmap bmp) // bitmap to byte array using LockBits
    {
        Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite, bmp.PixelFormat);
        IntPtr ptr = data.Scan0;
        int numBytes = data.Stride * bmp.Height;
        byte[] bytes = new byte[numBytes];
        System.Runtime.InteropServices.Marshal.Copy(ptr, bytes, 0, numBytes);
        bmp.UnlockBits(data);
        return bytes;
    }

    public static int GetPixel(this byte[] array, int bpr, int x, int y) // find out if the given pixel is 0 or 1
    {
        int num = y * bpr + x / 8;
        return (array[num] >> (7 - x % 8)) & 1;
    }

    public static List<Point> getDrawingPoints(this Point start, byte[] array, int width, int height) // take one 0 (black) point and find all adjacent black points by traversing neighbors
    {
        List<Point> points = new List<Point>();
        points.Add(start);
        int BytePerRow = array.Length / height; // was array.Length / bmp.Height, but bmp is not in scope here
        int counter = 0;
        do
        {
            for (int i = Math.Max(0, points[counter].X - 1); i <= Math.Min(width - 1, points[counter].X + 1); i++)
                for (int j = Math.Max(0, points[counter].Y - 1); j <= Math.Min(height - 1, points[counter].Y + 1); j++)
                    if (array.GetPixel(BytePerRow, i, j) == 0 && !points.Any(p => p.X == i && p.Y == j))
                        points.Add(new Point(i, j));
            counter++;
        } while (counter < points.Count);
        return points;
    }

    public static Bitmap ToBitmap(this List<Point> points) // convert points to bitmap
    {
        int startX = points.OrderBy(p => p.X).First().X,
            endX = points.OrderByDescending(p => p.X).First().X,
            startY = points.OrderBy(p => p.Y).First().Y,
            endY = points.OrderByDescending(p => p.Y).First().Y;
        Bitmap bmp = new Bitmap(endX - startX + 1, endY - startY + 1);
        using (Graphics g = Graphics.FromImage(bmp))
            g.FillRectangle(new SolidBrush(Color.White), new Rectangle(0, 0, bmp.Width, bmp.Height)); // fill the whole bitmap with white
        for (int i = startY; i <= endY; i++)
            for (int j = startX; j <= endX; j++)
                if (points.Any(p => p.X == j && p.Y == i)) bmp.SetPixel(j - startX, i - startY, Color.Black);
        return bmp;
    }
}
And use it like this to get all numbers inside the main image:
List<Point> processed = new List<Point>();
Bitmap bmp = ((Bitmap)Bitmap.FromFile(SourceBitmapPath));
byte[] array = bmp.ToArray();
int BytePerRow = array.Length / bmp.Height;
int imgIndex = 1;
for (int i = 0; i < bmp.Width; i++)
    for (int j = 0; j < bmp.Height; j++)
    {
        if (array.GetPixel(BytePerRow, i, j) == 0 && !processed.Any(p => p.X == i && p.Y == j))
        {
            List<Point> points = new Point(i, j).getDrawingPoints(array, bmp.Width, bmp.Height);
            processed.AddRange(points);
            Bitmap result = points.ToBitmap();
            result.Save($"{imgIndex++}.bmp");
        }
    }
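One performance note, as an aside (not part of the original answer): the points.Any(...) and processed.Any(...) lookups are linear, which makes the flood fill quadratic on large blobs. Tracking visited pixels in a HashSet<Point> keeps the lookups O(1). A minimal sketch of the changed traversal, assuming the same variables as getDrawingPoints:

// Same traversal, but with O(1) membership tests via HashSet<Point>.
HashSet<Point> visited = new HashSet<Point>();
List<Point> points = new List<Point> { start };
visited.Add(start);
int counter = 0;
do
{
    Point p = points[counter];
    for (int i = Math.Max(0, p.X - 1); i <= Math.Min(width - 1, p.X + 1); i++)
        for (int j = Math.Max(0, p.Y - 1); j <= Math.Min(height - 1, p.Y + 1); j++)
        {
            Point candidate = new Point(i, j);
            if (array.GetPixel(BytePerRow, i, j) == 0 && visited.Add(candidate)) // Add returns false if already present
                points.Add(candidate);
        }
    counter++;
} while (counter < points.Count);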
I'm using Paint with Save As in the monochrome BMP format to generate the source image.
I also tested it with this image:
which resulted in the following three images:

Draw angled rectangle on bitmap

I have a picture containing text:
I made a method to detect text rows. This method returns the 4 corners of the text zone (always sorted):
I want to modify the bitmap to draw a rectangle (with transparency) from these 4 corners. Something like this:
I have my image in grayscale. I created a function to draw a rectangle, but I only managed to draw an axis-aligned (non-rotated) rectangle:
public static void SaveDrawRectangle(int width, int height, Byte[] matrix, int dpi, System.Drawing.Point[] corners, string path)
{
    System.Windows.Media.Imaging.WriteableBitmap wbm = new System.Windows.Media.Imaging.WriteableBitmap(width, height, dpi, dpi, System.Windows.Media.PixelFormats.Bgra32, null);
    uint[] pixels = new uint[width * height];
    for (int Y = 0; Y < height; Y++)
    {
        for (int X = 0; X < width; X++)
        {
            byte pixel = matrix[Y * width + X];
            int red = pixel;
            int green = pixel;
            int blue = pixel;
            int alpha = 255;
            if (X >= corners[0].X && X <= corners[1].X &&
                Y >= corners[0].Y && Y <= corners[3].Y)
            {
                red = 255;
                alpha = 255;
            }
            pixels[Y * width + X] = (uint)((alpha << 24) + (red << 16) + (green << 8) + blue);
        }
    }
    wbm.WritePixels(new System.Windows.Int32Rect(0, 0, width, height), pixels, width * 4, 0);
    using (FileStream stream5 = new FileStream(path, FileMode.Create))
    {
        PngBitmapEncoder encoder5 = new PngBitmapEncoder();
        encoder5.Frames.Add(BitmapFrame.Create(wbm));
        encoder5.Save(stream5);
    }
}
How can I draw a rectangle from 4 arbitrary corners?
I modified my condition by replacing it with this code:
public static void SaveDrawRectangle(int width, int height, Byte[] matrix, int dpi, List<Point> corners, string path)
{
    System.Windows.Media.Imaging.WriteableBitmap wbm = new System.Windows.Media.Imaging.WriteableBitmap(width, height, dpi, dpi, System.Windows.Media.PixelFormats.Bgra32, null);
    uint[] pixels = new uint[width * height];
    for (int Y = 0; Y < height; Y++)
    {
        for (int X = 0; X < width; X++)
        {
            byte pixel = matrix[Y * width + X];
            int red = pixel;
            int green = pixel;
            int blue = pixel;
            int alpha = 255;
            if (IsInRectangle(X, Y, corners))
            {
                red = 255;
            }
            pixels[Y * width + X] = (uint)((alpha << 24) + (red << 16) + (green << 8) + blue);
        }
    }
    wbm.WritePixels(new System.Windows.Int32Rect(0, 0, width, height), pixels, width * 4, 0);
    using (FileStream stream5 = new FileStream(path, FileMode.Create))
    {
        PngBitmapEncoder encoder5 = new PngBitmapEncoder();
        encoder5.Frames.Add(BitmapFrame.Create(wbm));
        encoder5.Save(stream5);
    }
}

public static bool IsInRectangle(int X, int Y, List<Point> corners)
{
    Point p1, p2;
    bool inside = false;
    if (corners.Count < 3)
    {
        return inside;
    }
    var oldPoint = new Point(
        corners[corners.Count - 1].X, corners[corners.Count - 1].Y);
    for (int i = 0; i < corners.Count; i++)
    {
        var newPoint = new Point(corners[i].X, corners[i].Y);
        if (newPoint.X > oldPoint.X)
        {
            p1 = oldPoint;
            p2 = newPoint;
        }
        else
        {
            p1 = newPoint;
            p2 = oldPoint;
        }
        if ((newPoint.X < X) == (X <= oldPoint.X)
            && (Y - (long)p1.Y) * (p2.X - p1.X)
            < (p2.Y - (long)p1.Y) * (X - p1.X))
        {
            inside = !inside;
        }
        oldPoint = newPoint;
    }
    return inside;
}
It works, but it has 2 drawbacks:
the generated images are very big (the base image takes 6 MB, and 25 MB after drawing)
generation takes quite a while (my images are 5000x7000 pixels; the process takes about 10 seconds)
There is probably a better way, but this one works well.
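One way to cut the processing time, as a sketch on top of the code above (not a tested drop-in): the point-in-polygon test only needs to run inside the bounding box of the four corners, so the rest of the 5000x7000 pixels can skip IsInRectangle entirely:

// Before the pixel loops: bounding box of the four corners (requires System.Linq).
int minX = corners.Min(p => p.X), maxX = corners.Max(p => p.X);
int minY = corners.Min(p => p.Y), maxY = corners.Max(p => p.Y);

// In the inner loop, replace the plain IsInRectangle call with:
if (X >= minX && X <= maxX && Y >= minY && Y <= maxY && IsInRectangle(X, Y, corners))
{
    red = 255;
}

As for the file size, most of the growth likely comes from re-encoding the grayscale data as 32-bit BGRA in the output PNG, so a lighter pixel format would help wherever the colored overlay is not needed.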

Correctly executing bicubic resampling

I've been experimenting with the bicubic resampling algorithm present in the AForge framework, with the idea of introducing something similar into my image processing solution. See the original algorithm here and the interpolation kernel here.
Unfortunately I've hit a wall. It looks to me like I am somehow calculating the sample destination position incorrectly, probably because the algorithm was designed for Format24bppRgb images whereas I am using the Format32bppPArgb format.
Here's my code:
public Bitmap Resize(Bitmap source, int width, int height)
{
    int sourceWidth = source.Width;
    int sourceHeight = source.Height;
    Bitmap destination = new Bitmap(width, height, PixelFormat.Format32bppPArgb);
    destination.SetResolution(source.HorizontalResolution, source.VerticalResolution);
    using (FastBitmap sourceBitmap = new FastBitmap(source))
    {
        using (FastBitmap destinationBitmap = new FastBitmap(destination))
        {
            double heightFactor = sourceWidth / (double)width;
            double widthFactor = sourceHeight / (double)height;
            // Coordinates of source points
            double ox, oy, dx, dy, k1, k2;
            int ox1, oy1, ox2, oy2;
            // Width and height decreased by 1
            int maxHeight = height - 1;
            int maxWidth = width - 1;
            for (int y = 0; y < height; y++)
            {
                // Y coordinates
                oy = (y * widthFactor) - 0.5;
                oy1 = (int)oy;
                dy = oy - oy1;
                for (int x = 0; x < width; x++)
                {
                    // X coordinates
                    ox = (x * heightFactor) - 0.5f;
                    ox1 = (int)ox;
                    dx = ox - ox1;
                    // Destination color components
                    double r = 0;
                    double g = 0;
                    double b = 0;
                    double a = 0;
                    for (int n = -1; n < 3; n++)
                    {
                        // Get Y coefficient
                        k1 = Interpolation.BiCubicKernel(dy - n);
                        oy2 = oy1 + n;
                        if (oy2 < 0)
                        {
                            oy2 = 0;
                        }
                        if (oy2 > maxHeight)
                        {
                            oy2 = maxHeight;
                        }
                        for (int m = -1; m < 3; m++)
                        {
                            // Get X coefficient
                            k2 = k1 * Interpolation.BiCubicKernel(m - dx);
                            ox2 = ox1 + m;
                            if (ox2 < 0)
                            {
                                ox2 = 0;
                            }
                            if (ox2 > maxWidth)
                            {
                                ox2 = maxWidth;
                            }
                            Color color = sourceBitmap.GetPixel(ox2, oy2);
                            r += k2 * color.R;
                            g += k2 * color.G;
                            b += k2 * color.B;
                            a += k2 * color.A;
                        }
                    }
                    destinationBitmap.SetPixel(
                        x,
                        y,
                        Color.FromArgb(a.ToByte(), r.ToByte(), g.ToByte(), b.ToByte()));
                }
            }
        }
    }
    source.Dispose();
    return destination;
}
And the kernel, which should represent the given equation on Wikipedia:
public static double BiCubicKernel(double x)
{
    if (x < 0)
    {
        x = -x;
    }
    double bicubicCoef = 0;
    if (x <= 1)
    {
        bicubicCoef = (1.5 * x - 2.5) * x * x + 1;
    }
    else if (x < 2)
    {
        bicubicCoef = ((-0.5 * x + 2.5) * x - 4) * x + 2;
    }
    return bicubicCoef;
}
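For reference, expanding the two branches shows they match the standard convolution kernel from the Wikipedia page with a = -0.5 (the Catmull-Rom choice), so the kernel itself appears correct:

W(x) = 1.5|x|^3 - 2.5|x|^2 + 1              for |x| <= 1
W(x) = -0.5|x|^3 + 2.5|x|^2 - 4|x| + 2      for 1 < |x| < 2
W(x) = 0                                    otherwise

which is (a+2)|x|^3 - (a+3)|x|^2 + 1 and a|x|^3 - 5a|x|^2 + 8a|x| - 4a evaluated at a = -0.5.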
Here's the original image at 500px x 667px.
And the image resized to 400px x 543px.
Visually it appears that the image is over-reduced and then the same pixels are repeated once we hit a particular point.
Can anyone give me some pointers here to solve this?
Note: FastBitmap is a wrapper for Bitmap that uses LockBits to manipulate pixels in memory. It works well with everything else I use it for.
Edit
As requested, here are the methods involved in ToByte:
public static byte ToByte(this double value)
{
    return Convert.ToByte(ImageMaths.Clamp(value, 0, 255));
}

public static T Clamp<T>(T value, T min, T max) where T : IComparable<T>
{
    if (value.CompareTo(min) < 0)
    {
        return min;
    }
    if (value.CompareTo(max) > 0)
    {
        return max;
    }
    return value;
}
You are limiting ox2 and oy2 to the destination image dimensions instead of the source dimensions; since they are used to index sourceBitmap.GetPixel, they must be clamped to the source bounds.
Change this:
// Width and height decreased by 1
int maxHeight = height - 1;
int maxWidth = width - 1;
to this:
// Width and height decreased by 1
int maxHeight = sourceHeight - 1;
int maxWidth = sourceWidth - 1;
Well, I've run into a very strange thing, which may or may not be the source of the problem.
I started trying to implement a convolution matrix myself and encountered strange behaviour. I was testing the code on a small 4x4 pixel image. The code is the following:
var source = Bitmap.FromFile(@"C:\Users\Public\Pictures\Sample Pictures\Безымянный.png");
using (FastBitmap sourceBitmap = new FastBitmap(source))
{
    for (int TY = 0; TY < 4; TY++)
    {
        for (int TX = 0; TX < 4; TX++)
        {
            Color color = sourceBitmap.GetPixel(TX, TY);
            Console.Write(color.B.ToString().PadLeft(5));
        }
        Console.WriteLine();
    }
}
Although I'm printing out only the blue channel value, it's still clearly incorrect.
On the other hand, your solution partially works, which makes the thing I've found somewhat irrelevant. One more guess I have: what is your system's DPI?
From what I have found helpful, here are some links:
C++ implementation of bicubic interpolation on a matrix
C# implementation of bicubic interpolation, lacking the part about rescaling
Thread on gamedev.net which has an almost working solution
That's my answer so far, but I will keep trying.
