Hook DirectX to make objects in scene transparent - C#

I am good at C# programming and I want to know if it is possible to make objects from another DirectX program about fifty percent transparent, so I can see behind them. I would be happy if you could provide a working example or a link to a tutorial that points me in the right direction.
I don't care about the DirectX version, but it would be great if it worked on version 9 and up.
I don't know anything about patching DLLs, so it would be great if it only used C# hooks without DLL editing.
If it isn't possible with C#, is it possible with C++?

It is possible with both C# and C++, although I prefer C++ for such things. I can recommend a C# hooking solution like EasyHook: https://easyhook.github.io/. With DX9 you might need to hook the BeginScene function and apply the transparency render state there, but that might be too early; you will have to do a lot of testing. You might also need to modify the objects' vertex declarations prior to Present (DX10/11) or EndScene (DX9). It depends on how the objects are drawn in each application.
You can also check Direct3DHook on GitHub: https://github.com/spazzarama/Direct3DHook
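To make the hooking part concrete, here is a rough sketch of the classic D3D9 EndScene hook used by projects like Direct3DHook, built with EasyHook plus SharpDX for the Direct3D interop (SharpDX is my assumption; any D3D9 wrapper works). Slot 42 is EndScene in the IDirect3DDevice9 vtable; everything else here (the throwaway device used to find the function address, where you force the blend states) is a starting point you will need to test against your target application, as noted above:

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;
using EasyHook;                 // https://easyhook.github.io/
using SharpDX.Direct3D9;        // assumption: SharpDX as the D3D9 interop layer

public class EndSceneHook
{
    // IDirect3DDevice9::EndScene is slot 42 in the COM vtable.
    const int EndSceneVTableSlot = 42;

    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    delegate int EndSceneDelegate(IntPtr devicePointer);

    LocalHook _hook;
    EndSceneDelegate _original;

    public void Install()
    {
        // Create a throwaway device purely to read the address of EndScene
        // out of the vtable; the device itself is never used for rendering.
        using (var form = new Form())
        using (var d3d = new Direct3D())
        using (var device = new Device(d3d, 0, DeviceType.NullReference, IntPtr.Zero,
            CreateFlags.HardwareVertexProcessing,
            new PresentParameters
            {
                Windowed = true,
                SwapEffect = SwapEffect.Discard,
                BackBufferWidth = 1,
                BackBufferHeight = 1,
                DeviceWindowHandle = form.Handle
            }))
        {
            IntPtr vtable = Marshal.ReadIntPtr(device.NativePointer);
            IntPtr endScene = Marshal.ReadIntPtr(vtable, EndSceneVTableSlot * IntPtr.Size);

            _original = (EndSceneDelegate)Marshal.GetDelegateForFunctionPointer(
                endScene, typeof(EndSceneDelegate));
            _hook = LocalHook.Create(endScene, new EndSceneDelegate(EndSceneDetour), this);
            _hook.ThreadACL.SetExclusiveACL(new int[] { 0 }); // hook all threads except this one
        }
    }

    int EndSceneDetour(IntPtr devicePointer)
    {
        var device = new Device(devicePointer); // wrap the game's device (do not dispose it)

        // Blunt demonstration only: force alpha blending on. States set at EndScene
        // are too late for the frame just drawn, so a real attempt would hook
        // DrawIndexedPrimitive instead and force ~50% alpha per draw call.
        device.SetRenderState(RenderState.AlphaBlendEnable, true);
        device.SetRenderState(RenderState.SourceBlend, (int)Blend.SourceAlpha);
        device.SetRenderState(RenderState.DestinationBlend, (int)Blend.InverseSourceAlpha);

        return _original(devicePointer);
    }
}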

You can look at @VuVirt's answer for the hooking portion, but the real challenge resides in the translucency portion. Here is a short list of things you may have to consider, depending on the application you want to hook into:
Depth testing: if objects are not rendered back to front, you will have to disable depth testing to see through them.
Depth pre-pass: if an engine does a pre-pass to fill the depth buffer, you again won't be able to see through unless you disable that pass entirely.
Translucency needs a back-to-front draw order to look correct.
Deferred rendering: opaque objects are rendered into multiple render targets and lit in screen space; tweaking translucency there would be meaningless and plain wrong.
Occlusion culling: if an application does occlusion culling, the object you want to see may never even be sent to DirectX.
Various post-processing effects may need opaque surfaces to copy around.
And many other obstacles along the way.


Should entities have the capability to draw themselves? [closed]

A bit of a simple question but it's one I'm never too sure of. This is mostly in context of game development. Let's say you had an entity, like a ship or a character. Should (and I know this is a matter of opinion) this object contain a Draw method? (perhaps implement an IDrawable interface)
When looking at source code in some game projects I see this done in very many ways. Sometimes there is a separate class that may take in an Entity base object and draw it, given its properties (texture, location, et cetera). Other times the entity is given its own draw method and they are able to render themselves.
I hope this question is not too subjective. Sometimes I hear people say that drawing should be completely separate from objects, and that they should not know how to draw themselves or have the capability to. Others are more pragmatic and feel that this kind of design, where objects have this capability, is fine.
I'm not too sure myself. I didn't find any similar questions, perhaps because this is a bit subjective or because either method is fine, but I'd like to hear what SO has to say about it, because these kinds of design decisions plague my development daily: small decisions like whether object A should have the capability to perform function B, or whether that would violate certain design principles.
I'm sure some will say to just go with my gut and refactor later if necessary but that's what I have been doing and now I'd like to hear the actual theory behind deciding when certain objects should maintain certain capabilities.
I suppose what I am looking for are some heuristics for determining how much authority a given object should have.
You should go with whatever makes your implementation the easiest. Even if it turns out you made the wrong choice, you gain first-hand experience of why one method is better than the other and can apply that knowledge in the future. This is valuable knowledge that will help you make decisions later. I find that one of the best ways I learn the merits of something is to do it wrong a few times (not on purpose, mind you) so I get a good understanding of the pitfalls of an approach.
The way I handle it is this: An entity has all the information on it that is required for it to be drawn. e.g. The sprites that make it up, where they are located relative to the center of the entity. The entity itself does very little, if anything at all. It's actually just a place to store information that other systems operate on.
The world rendering code handles drawing the world as well as all the entities in it. It looks at a given entity and draws each of its sprites in the right spot, applying any necessary camera transformations or effects or what-have-you.
This is how most of my game works. The entity has no idea about physics either, or anything. All it is is a collection of data, which other systems operate on. It doesn't know about physics, but it does have some Box2D structures that hang off of it which Box2D operates on during the physics updating phase. Same with AI or Input and everything else.
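A bare-bones sketch of that arrangement, with the types boiled down to the bare idea (all names invented for illustration):

using System.Collections.Generic;

// The entity is just data: no Draw(), no Update(), nothing.
class Entity
{
    public float X, Y;
    public List<string> SpriteNames = new List<string>();
}

// A separate system owns the drawing and walks the entities.
class WorldRenderer
{
    public void DrawWorld(IEnumerable<Entity> entities)
    {
        foreach (var entity in entities)
            foreach (var sprite in entity.SpriteNames)
                DrawSprite(sprite, entity.X, entity.Y); // camera transforms, effects, etc. go here
    }

    void DrawSprite(string sprite, float x, float y) { /* engine-specific call */ }
}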
Each method has its pros and cons.
Advantage of letting objects draw themselves:
It is more comfortable for both parties. The person writing the engine just has to call a function, and the person using it, writing the IDrawable classes, has low-level access to everything they need. If instead you wanted a system where each object has to define what it looks like, which shaders and textures it will use, and how to apply any special effects, you would end up with either a pretty complicated system or a very limited one.
Advantage of having a renderer manage and draw objects:
All modern 3D game engines do it this way, and for a very good reason. If the renderer has full control over which textures are bound, which models to use, and which shaders are active, it can optimize. Say you had a lot of items rendering with the same shader. The renderer can set the shader once and render them all in one batch! You can't do this if each object draws itself.
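As a toy illustration of that batching point (all names invented, not from any particular engine):

using System.Collections.Generic;
using System.Linq;

class DrawCall { public int ShaderId; public int MeshId; }

class Renderer
{
    readonly List<DrawCall> _queue = new List<DrawCall>();

    public void Submit(DrawCall call) => _queue.Add(call);

    public void Flush()
    {
        // One shader bind per group instead of one per object.
        foreach (var group in _queue.GroupBy(c => c.ShaderId))
        {
            BindShader(group.Key);
            foreach (var call in group)
                DrawMesh(call.MeshId);
        }
        _queue.Clear();
    }

    void BindShader(int shaderId) { /* graphics API call */ }
    void DrawMesh(int meshId) { /* graphics API call */ }
}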
Conclusion
For small projects, it's fine to have each object draw itself. It's more comfortable, in fact.
If you really want performance and want to squeeze everything out of your graphics card, you need a renderer that has strict control over what is rendered and how, so that it can optimize.
It depends what you mean by 'draw itself'. If you mean should it contain low-level routines involving pixels or triangles, then probably not.
My suggestion:
One class to represent the behavior of the object (movement, AI, whatever)
One class to represent the appearance of the object
One class to define an object as combination of an appearance and a behavior
If the 'appearance' involves some custom drawing routine, that could live in that class. On the whole, though, drawing would be abstracted away from the underlying API, and the object would be drawn by some rendering manager, perhaps using the strategy pattern, the visitor pattern, or some IoC pattern. This is especially true in game design, where things like swapping textures in and out of video memory and drawing things in the correct order are important; something needs to be able to optimize things above the object level.
To try to be more specific, one part of your object's graph (object itself or appearance subdivision) implements IRenderable, has a method Draw(IRenderEngine), and that IRenderEngine gives the IRenderable access to methods like DrawSprite, CreateParticleEffect or whatever. Optimising or measuring your engine and algorithms will be far easier this way, and if you need to swap out engines or re-write in an unmanaged, high-performance way (or port to another platform), just create a new implementation of IRenderEngine.
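In code, the shape I'm describing is roughly this (the interface and method names are the examples from above; ShipAppearance is invented for the sketch):

interface IRenderEngine
{
    void DrawSprite(int spriteId, float x, float y);
    void CreateParticleEffect(string name, float x, float y);
}

interface IRenderable
{
    void Draw(IRenderEngine engine);
}

class ShipAppearance : IRenderable
{
    public float X, Y;

    public void Draw(IRenderEngine engine)
    {
        // The appearance says *what* to draw; the engine decides *how*.
        engine.DrawSprite(42, X, Y);
    }
}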
Hint: if you do this, you can have many things that look the same/similar but act differently by swapping out the behavior, or lots of things that look different but act the same. You can also have the behavior stuff happen on a server, and the appearance stuff happen on a client.
I've done something very similar to this before. Each class of object had a base class that contained properties used on the client, but on the server it was a derived instance of that class that was instantiated, and it included the behavior (movement, simulation, etc). The class also had a factory method for providing its associated IRenderable (appearance) interface. Actually had a pretty effective multiplayer game engine in the end with that approach.
I suppose what I am looking for are some heuristics for determining how much authority a given object should have.
I'd replace authority with responsibility, and then the answer is ONE. Responsibility for both the behaviour and the drawing of a ship seems too much to me.

Creating a DSP system from scratch

I love electronic music and I am interested in how it all ticks.
I've found lots of helpful questions on Stack Overflow about libraries that can be used to play with audio, filters, etc. But what I am really curious about is what is actually happening: how is the data being passed between effects and oscillators? I have done research into the mathematical side of DSP and I've got that end of the problem sussed, but I am unsure what buffering system to use, etc. The final goal is to have a simple object hierarchy of effects and oscillators that pass the data between each other (maybe using multithreading if I don't end up pulling out all my hair trying to implement it). It's not going to be the next Propellerhead Reason, but I am interested in how it all works and this is more of an exercise than something that will yield an end product.
At the moment I use .NET and C#, and I have recently learnt F# (which may or may not lead to some interesting ways of handling the data), but if these are not suitable for the job I can learn another system if necessary.
The question is: what is the best way to get the large amounts of signal data through the program using buffers? For instance, would I be better off using a Queue, an Array, a Linked List, etc.? Should I make the samples immutable and create a new set of data each time I apply an effect, or just edit the values in the buffer? Should I have a dispatcher/thread-pool-style object that organises passing the data around, or should the effect functions pass data directly between each other?
Thanks.
EDIT: another related question is: how would I then use the Windows API to play this array? I don't really want to use DirectShow because Microsoft has pretty much left it to die now.
EDIT 2: thanks for all the answers. After looking at all the technologies, I will either use XNA 4 (I spent a while trawling the internet and found this site which explains how to do it) or NAudio to output the music... not sure which one yet; it depends on how advanced the system ends up being. When C# 5.0 comes out, I will use its async capabilities to create an effects architecture on top of that. I've pretty much used everybody's answer equally, so now I have a conundrum of who to give the bounty to...
Have you looked at VST.NET (http://vstnet.codeplex.com/)? It's a library for writing VSTs in C#, and it has some examples. You can also consider writing a VST, so that your code can be used from any host application (and even if you don't want to, looking at their code can be useful).
Signal data is usually big and requires a lot of processing. Do not use a linked list! Most libraries I know simply use an array to hold all the audio data (after all, that's what the sound card expects).
From a VST.NET sample:
public override void Process(VstAudioBuffer[] inChannels, VstAudioBuffer[] outChannels)
{
    VstAudioBuffer audioChannel = outChannels[0];

    for (int n = 0; n < audioChannel.SampleCount; n++)
    {
        audioChannel[n] = Delay.ProcessSample(inChannels[0][n]);
    }
}
The audioChannel is a wrapper around an unmanaged float* buffer.
You would probably store your samples in an immutable array. Then, when you want to play them, you copy the data into the output buffer (changing the frequency if you want) and apply effects in that buffer. Note that you can use several output buffers (or channels) and sum them at the end.
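Summing several channels into the output can be as simple as this sketch (the clamp at the end is my addition, so the sum stays inside the card's expected range):

using System;
using System.Collections.Generic;

static class Mixer
{
    public static float[] Mix(IList<float[]> channels, int sampleCount)
    {
        var output = new float[sampleCount];
        foreach (var channel in channels)
            for (int n = 0; n < sampleCount; n++)
                output[n] += channel[n];

        // Keep the sum inside [-1, 1] so the output doesn't clip harshly.
        for (int n = 0; n < sampleCount; n++)
            output[n] = Math.Max(-1f, Math.Min(1f, output[n]));

        return output;
    }
}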
Edit
I know two low-level ways to play your array: DirectSound and WaveOut from the Windows API. C# example using DirectSound. C# example with WaveOut. However, you might prefer to use an external higher-level library, like NAudio. NAudio is convenient for .NET audio manipulation - see this blog post for sending a sine wave to the audio card. You can see they are also using an array of floats, which is what I recommend (if you do your computations using bytes, you'll end up with a lot of aliasing in the sound).
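For reference, the approach in that blog post boils down to something like the following sketch, using NAudio's WaveProvider32 base class (check the post itself for the full version; the phase bookkeeping here is my variation):

using System;
using NAudio.Wave; // NuGet: NAudio

// NAudio asks us to fill a float buffer; we hand the provider to WaveOut.
public class SineWaveProvider : WaveProvider32
{
    double _phase;
    public float Frequency = 440f;
    public float Amplitude = 0.25f;

    public override int Read(float[] buffer, int offset, int sampleCount)
    {
        double phaseStep = 2 * Math.PI * Frequency / WaveFormat.SampleRate;
        for (int n = 0; n < sampleCount; n++)
        {
            buffer[offset + n] = Amplitude * (float)Math.Sin(_phase);
            _phase = (_phase + phaseStep) % (2 * Math.PI);
        }
        return sampleCount;
    }
}

// Usage:
// var provider = new SineWaveProvider();
// provider.SetWaveFormat(44100, 1);  // 44.1 kHz, mono
// var waveOut = new WaveOut();
// waveOut.Init(provider);
// waveOut.Play();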
F# is probably a good choice here, as it's well fitted to manipulate functions. Functions are probably good building blocks for signal creation and processing.
F# is also good at manipulating collections in general, and arrays in particular, thanks to the higher-order functions in the Array module.
These qualities make F# popular in the finance sector and are also useful for signal processing, I would guess.
Visual F# 2010 for Technical Computing has a section dedicated to Fourier Transform, which could be relevant to what you want to do. I guess there is plenty of free information about the transform on the net, though.
Finally, to play samples, you can use XNA. I think the latest version of the API (4.0) also allows recording, but I have never used that. There is a famous music editing app for the Xbox called ezmuse+ Hamst3r Edition that uses XNA, so it's definitely possible.
With respect to buffering and asynchrony/threading/synchronization issues, I suggest you take a look at the new TPL Dataflow library. With its block primitives, concurrent data structures, dataflow networks, async message processing, and TPL's Task-based abstraction (which can be used with the async/await C# 5 features), it's a very good fit for this type of application.
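For example, an effects chain could be wired up as dataflow blocks that pass float[] buffers along. A minimal sketch (the block bodies are placeholders for your actual DSP):

using System.Threading.Tasks.Dataflow; // NuGet: System.Threading.Tasks.Dataflow

class EffectsPipeline
{
    public static void Run()
    {
        // Each effect is a block; buffers flow oscillator -> gain -> output.
        var gain = new TransformBlock<float[], float[]>(buffer =>
        {
            for (int n = 0; n < buffer.Length; n++)
                buffer[n] *= 0.5f;
            return buffer;
        });

        var output = new ActionBlock<float[]>(buffer =>
        {
            // hand the finished buffer to your audio output here
        });

        gain.LinkTo(output, new DataflowLinkOptions { PropagateCompletion = true });

        gain.Post(new float[512]); // buffers from your oscillator go in here
        gain.Complete();
        output.Completion.Wait();
    }
}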
I don't know if this is really what you're looking for, but this was one of my personal projects while in college. I didn't truly understand how sound and DSP worked until I implemented it myself. I was trying to get as close to the speaker as possible, so I did it using only libsndfile, to handle the file format intricacies for me.
Basically, my first project was to create a large array of doubles, fill it with a sine wave, then use sf_writef_double() to write that array to a file to create something that I could play, and see the result in a waveform editor.
Next, I added another function in between the sine call and the write call, to add an effect.
This way you start playing with very low-level oscillators and effects, and you can see the results immediately. Plus, it's very little code to get something like this working.
Personally, I would start with the simplest possible solution you can, then slowly add on. Try just writing out to a file and using your audio player to play it, so you don't have to deal with the audio APIs. Just use a single array to start, and modify in place. Definitely start off single-threaded. As your project grows, you can start moving to other solutions, like pipes instead of the array, multithreading it, or working with the audio API.
If you're wanting to create a project you can ship, depending on exactly what it is, you'll probably have to move to more complex libraries, like some real-time audio processing. But the basics you learn by doing the simple way above will definitely help when you get to this point.
Good luck!
I've done quite a bit of real-time DSP, although not with audio. While either of your ideas (immutable buffers vs. mutable buffers modified in place) could work, what I prefer to do is create a single permanent buffer for each link in the signal path. Most effects don't lend themselves well to modification in place, since each input sample affects multiple output samples. The buffer-for-each-link technique works especially well when you have resampling stages.
Here, when samples arrive, the first buffer is overwritten. Then the first filter reads the new data from its input buffer (the first buffer) and writes to its output (the second buffer). Then it invokes the second stage to read from the second buffer and write into the third.
This pattern completely eliminates dynamic allocation, allows each stage to keep a variable amount of history (since effects need some memory), and is very flexible as far as enabling rearranging the filters in the path.
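A sketch of what one link in such a path could look like (the Transform placeholder stands in for the stage-specific DSP; each stage owns its output buffer for the life of the stream, so nothing is allocated while audio is running):

class FilterStage
{
    readonly float[] _output;     // permanent buffer owned by this link
    readonly FilterStage _next;   // downstream stage, or null at the end

    public FilterStage(int bufferSize, FilterStage next)
    {
        _output = new float[bufferSize];
        _next = next;
    }

    public void Process(float[] input)
    {
        for (int n = 0; n < input.Length; n++)
            _output[n] = Transform(input[n]); // the stage-specific DSP goes here

        if (_next != null)
            _next.Process(_output);
    }

    protected virtual float Transform(float sample) { return sample; }
}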
Alright, I'll have a stab at the bounty as well then :)
I'm actually in a very similar situation. I've been making electronic music for ages, but it's only over the past couple of years that I've started exploring actual audio processing.
You mention that you have researched the maths. I think that's crucial. I'm currently fighting my way through Ken Steiglitz' A Digital Signal Processing Primer - With Applications to Digital Audio and Computer Music. If you don't know your complex numbers and phasors it's going to be very difficult.
I'm a Linux guy so I've started writing LADSPA plugins in C. I think it's good to start at that basic level, to really understand what's going on. If I was on Windows I'd download the VST SDK from Steinberg and write a quick proof of concept plugin that just adds noise or whatever.
Another benefit of choosing a framework like VST or LADSPA is that you can immediately use your plugins in your normal audio suite. The satisfaction of applying your first home-built plugin to an audio track is unbeatable. Plus, you will be able to share your plugins with other musicians.
There are probably ways to do this in C#/F#, but I would recommend C++ if you plan to write VST plugins, just to avoid any unnecessary overhead. That seems to be the industry standard.
In terms of buffering, I've been using circular buffers (a good article here: http://www.dspguide.com/ch28/2.htm). A good exercise is to implement a finite impulse response (FIR) filter (what Steiglitz refers to as a feedforward filter) - these rely on buffering and are quite fun to play around with.
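For instance, a minimal FIR filter over a circular buffer looks like this (the coefficients in the usage line are just a four-tap moving average; design real ones with the methods from the book):

class FirFilter
{
    readonly float[] _coefficients;
    readonly float[] _history; // circular buffer of past input samples
    int _pos;

    public FirFilter(float[] coefficients)
    {
        _coefficients = coefficients;
        _history = new float[coefficients.Length];
    }

    // y[n] = sum over i of b[i] * x[n - i]
    public float Process(float input)
    {
        _history[_pos] = input;
        float output = 0f;
        for (int i = 0; i < _coefficients.Length; i++)
            output += _coefficients[i] * _history[(_pos - i + _history.Length) % _history.Length];
        _pos = (_pos + 1) % _history.Length;
        return output;
    }
}

// var movingAverage = new FirFilter(new[] { 0.25f, 0.25f, 0.25f, 0.25f });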
I've got a repo on Github with a few very basic LADSPA plugins. The architectural difference aside, they could potentially be useful for someone writing VST plugins as well. https://github.com/andreasjansson/my_ladspa_plugins
Another good source of example code is the CSound project. There's tonnes of DSP code in there, and the software is aimed primarily at musicians.
Start by reading this and this.
They will give you an idea of WHAT you have to do.
Then learn the DirectShow architecture - learn HOW not to do it, and try to create your own simplified version of it.
You could have a look at BYOND. It is an environment for programmatic audio/MIDI instrument and effect creation in C#. It is available standalone and as a VST instrument and effect.
FULL DISCLOSURE: I am the developer of BYOND.

How do I visualize a complex graph in .Net?

I need to visualize a graph. I don't know what such a graph is called (by the way, if you know, I'll appreciate it if you tell me). It would be ideal for graph elements to be clickable (so that when the user clicks on a block, I can handle an event with the element id specified), but I can survive even without any interactivity. I may also like to be able to focus on a particular node and lay out all the others to view from its perspective. Are there any components available that are good for this task? If not, what should I look for to help me develop an algorithm for drawing such a graph with a visually comfortable layout?
The practical nature of this graph is pretty common: each block represents a derivation from 2 operands. Orange circles are references to the 2 operands, green circles are connection points for consumers. It can be significant to distinguish operand position (left or right), for example if a derivation represents a mathematical operation of difference or division (in this particular case the block can be triangular, but in other cases an operand itself can make use of knowing for which blocks it is a left operand and for which it is a right one). Another common application is intersecting sets with complex relations.
You could take a look at Graph#, but I'm not sure how well it'll handle composite nodes like that. It could be a good starting point though.
I would also like to point you to graphviz. It is not a .NET solution, but you can feed it files that are easy enough to write in order to create graphs. I don't think layout is a very simple thing to do, especially as the node count increases, so it is a good idea to find an existing tool for that.
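To show how easy those files are to write, here is a sketch that dumps an adjacency list to the DOT language; running "dot -Tsvg graph.dot -o graph.svg" then produces a laid-out drawing:

using System.Collections.Generic;
using System.IO;
using System.Text;

static class DotExporter
{
    public static void Write(Dictionary<string, List<string>> adjacency, string path)
    {
        var sb = new StringBuilder();
        sb.AppendLine("digraph g {");
        foreach (var node in adjacency)
            foreach (var target in node.Value)
                sb.AppendLine("    \"" + node.Key + "\" -> \"" + target + "\";");
        sb.AppendLine("}");
        File.WriteAllText(path, sb.ToString());
    }
}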
It seems Microsoft itself has done a really good job on graph visualization, with a project called automatic-graph-layout.
Here's the link: https://github.com/microsoft/automatic-graph-layout.
Well, you first need to represent it somehow in memory; there are many ways, such as an adjacency list. Then you need to draw it. While drawing a graph is generally simple, it's not that simple if you need to lay it out. It looks like in your case that's exactly what you need to do to arrive at that nice representation. It ain't going to be easy.
EDIT: Interesting, there seems to be a library made by Microsoft Research.
I don't know how useful it will be in this particular scenario, but you might want to take a look at http://quickgraph.codeplex.com/
Graphviz4Net provides a WPF component for graph visualization. It depends on GraphViz (an open-source command-line graph visualization tool).
I could not find a suitable component, so I decided to write my own controls (a line and a head) and use them to visualize my graphs.
If you need them, I can give you the component and a program that demonstrates it.
I wrote the component and programs in Visual Studio 2008, in C#.
This is a fairly new and maintained .NET wrapper for Graphviz: https://github.com/Rubjerg/Graphviz.NetWrapper
(Disclaimer: I'm the author)
This wrapper works differently from other wrappers, since it makes direct function calls to the native Graphviz code. This means you can not only programmatically construct your graph in C# code, but also read the layout attributes back out in C# code and render it any way you want. The latter sounds specifically like something you would be interested in.
A quite good-looking one is the Diagram tool from Nevron. But it's not free!
I'm currently using their charts and user interface components, and they work quite well.
I have used this commercial product with success: GoDiagram.
It supports multiple ports on the nodes, like you have shown.

Patterns: Render engine allowing editor integration?

I am trying to determine whether (before I start) I can pick some suitable patterns for a new project that will help with development as the project gets more complicated.
The Scenario
To have an application that draws 'simple' lines on the screen, ideally encapsulated in a 'render engine' which I can package into Silverlight, WPF demo applications, etc.
I also require an editor application that uses the render engine to do the bulk of the displaying, but provides additional functionality like control points for moving the lines about the screen and dialogs for changing the colours of the lines.
The Goal
To keep the render engine specialised and efficient. The editor should be able to 'inject' the additional functionality (i.e. display of control points) into the objects used by the rendering engine, or into the render engine itself. I don't want to code any editor-specific code into the render engine.
My thoughts so far
I'm thinking of using an encapsulation/template pattern for the objects that will be used by the rendering engine, somehow allowing the editor application to supply a class to the object that 'tacks on' the functionality for the control points (and, for example, the event handling for moving the control points).
My reason for liking this idea is that the rendering engine never needs to know about the environment in which it is working. The editor can be changed extensively without ever having to change the rendering engine (hopefully).
But....
I may be wrong; if anyone can see any pitfalls, or has experience of better ways of tackling this problem, I would love to hear them!
I agree with Charlie that you should start with a simple design prototype and extend it as needed (that's how I started with my map rendering engine). I can give you a few suggestions though:
Separate the drawing engine from the rest of the code (here's an example of how). By the drawing engine I mean the code that actually draws something on the screen/bitmap. It should consume drawing primitives, not higher-level entities. This way you'll be able to switch the drawing engine easily (for example: SVG, PDF, GDI, WPF...); see the sketch after this list.
If you want to have an editor, one of the patterns that is useful is the State pattern.
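As a sketch of the first suggestion, the drawing engine boundary can be as small as this (the names are mine, not from the linked example); the editor then draws its control points through the same primitives, and an SVG or PDF backend is just another implementation:

interface IDrawingSurface
{
    void DrawLine(float x1, float y1, float x2, float y2, string color);
}

class SvgSurface : IDrawingSurface
{
    public System.Text.StringBuilder Svg { get; } = new System.Text.StringBuilder();

    public void DrawLine(float x1, float y1, float x2, float y2, string color)
    {
        Svg.AppendLine("<line x1='" + x1 + "' y1='" + y1 +
                       "' x2='" + x2 + "' y2='" + y2 +
                       "' stroke='" + color + "'/>");
    }
}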
Well shoot, that's an impressive amount of forethought.
My philosophy has always been, use a design pattern when you need to use one. Otherwise you may become an architecture astronaut, designing grand schemes all for naught. It's good that you're thinking about design before development, but really, how much can you possibly know about the project before any code has been written? Just about nothing. And if you force yourself into a pattern before any code has been written, you may end up jamming a square peg into a round hole for the entire lifecycle of the application.
My advice to you: write a prototype first. Quick and dirty. No real design; just make a skeleton that walks. Learn from it. If you thought of a better way, scrap the original and redesign a new one. Use design patterns that make sense as you add functionality, not for the sake of adding functionality.

What Should Be in a 2D Game Engine? [closed]

Ok, so I ended up writing my own game engine based on top of XNA, and I am just wondering what else I need to make a complete engine.
This is what's in the engine:
Physics (Farseer Physics)
Particle Engine (Mercury Project)
2D Cameras
Input Handling
Screen Management (Menus, Pause Screen, etc.)
Sprites (animation, sprite sheets)
And XNA stuff like Sound.
Am I missing anything that might be crucial to a game engine?
You're approaching it in an upside-down manner.
What should be in your engine is the following:
All the code that turned out to be common between your first and your second game.
First, write a game. Don't write an engine, because, as you have found out, you don't know what it should contain, or how it should be designed. Write a game instead.
Once you have that game, write another game. Then, when you have done that, examine the second game's code. How much of it was reused from the first game?
Anything that was reused should then be refactored out into a separate project. That project is now your game engine.
This doesn't mean that you shouldn't plan, or shouldn't try to write nice code. But make sure you're working towards something concrete, something where you can tell a good implementation from a bad one. A good implementation of a game, is one that works, is fun, and doesn't crash. Write your code to achieve those things first.
A good implementation of an engine? That's trickier. What's a good implementation of a renderer? Of an AI framework? Of a particle system? Ultimately, the only way to determine whether you have a good engine is by seeing how well it works in an actual game. So if you don't have a game, you have no way to evaluate your engine. And if you have no way to evaluate your engine, you have no way of judging whether any of the code you're writing is actually useful.
A theme or market for your engine. If you're doing anything beyond your basic graphics engine, you'll want to concentrate on a market for your engine, like RPG, strategy, puzzle, platformer, action, or FPS (ok, not FPS).
This will help you point yourself in the direction you need to go in order to make further enhancements to the engine without asking us. An engine like, say, the Unreal Engine can do multiple things, but what it tends to do best is what it was made for: FPS games. Likewise, you should tailor your engine so that it suits a particular field of interest, and therefore gets picked up for that type of gameplay.
You can make it general to a point, but realize that the more generalized your engine is, the harder it is to program, both time-wise and skill-wise. Other programmers are also less likely to pick up a general engine (unless that's all there is) if a more specific platform is available - or they'll just write their own, since modifying a generalized engine is about as hard as creating your own.
A few more things:
Path finding - very useful for AI
AI - possibly - depends on how generic you want the engine to be.
High scores
Replays - makes high scores much more interesting, as you can actually watch them.
A couple ideas:
Artificial intelligence (perhaps just simple AI utilities, like pathfinding algorithms)
Saving all or part of the game state (for suspending and restarting at a later time or saving high scores).
I think that you covered the general requirements of a 2D engine. The only thing I would miss in that list would be:
GUI Library
Also to make development processes easier:
Script Engine (LUA, C#Script, ...)
Dynamically Refreshed Assets (see Nick Gravelyn's Blog Entry)
You might also add another layer on top of XNA's existing stuff:
A quite bareboned Network/Lobby implementation
More abstract handling of multiple controllers (drop-in/drop-out during gaming sessions, as seen in Resident Evil 5 co-op) - maybe event-based
Finally you might add some "ready2use" shaders. Maybe get some inspiration from the discontinued FaceWound (from the "Garry's Mod" developer).
It depends on the game, but another thing often needed is a good networking framework.
Many modern games, including 2D games, seem to have some form of networking in place.
Animation framework so that you can say: take this sprite, move it in this direction, following this path, using this speed, acceleration and such (a sketch of the idea follows below)
Basic GUI system. Don't implement a whole windowing system, just basic things like a pointer and a button, and such - keep it basic
Debugging component for displaying FPS, the number of sprites and such
Also, a good thing is to make some games; then you will quickly see which things you repeat doing for each game, and can look into how to get those into the engine.
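A sketch of the kind of building block the animation item above could mean (using XNA types, since the engine is XNA-based; the class and method names are made up):

using Microsoft.Xna.Framework;

class MoveAnimation
{
    Vector2 _velocity;
    readonly Vector2 _acceleration;

    public MoveAnimation(Vector2 initialVelocity, Vector2 acceleration)
    {
        _velocity = initialVelocity;
        _acceleration = acceleration;
    }

    // Call once per frame; returns the sprite's new position.
    public Vector2 Update(Vector2 position, GameTime gameTime)
    {
        float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
        _velocity += _acceleration * dt;
        return position + _velocity * dt;
    }
}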
A 2D lighting system is a good advanced feature. You can use Krypton for that.
Map editor. Or, even better, support a standard tile-map format compatible with the Tiled Map Editor, so people can just use Tiled.
Scheduler/timer for game actions (a sketch follows after this list).
Screen wipe effects such as page turn, fade out, etc., to make nice transitions between screens.
Game layers with built-in parallax would be useful too.
In-game console to process commands or scripts without restarting.
Easy loading of texture atlases as sprites or animations.
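For the scheduler item, something as small as this is enough to start with (hypothetical names; tick it from your game's Update):

using System;
using System.Collections.Generic;

class Scheduler
{
    struct Pending { public double Due; public Action Action; }

    readonly List<Pending> _pending = new List<Pending>();
    double _now;

    public void Schedule(double delaySeconds, Action action)
    {
        _pending.Add(new Pending { Due = _now + delaySeconds, Action = action });
    }

    public void Update(double elapsedSeconds)
    {
        _now += elapsedSeconds;
        for (int i = _pending.Count - 1; i >= 0; i--)
        {
            if (_pending[i].Due <= _now)
            {
                Action action = _pending[i].Action;
                _pending.RemoveAt(i);
                action();
            }
        }
    }
}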
Nice work!
We (me and some of my friends) are actually working on a game engine too. We've already got everything you mentioned, but we've also got the following:
Audio manager: a simple class to handle background music and sound effects in XNA.
Video manager: it's not complicated, a simple class to handle video playback in XNA.
Effects manager: responsible for stuff like bloom, blur, black/white colors, etc.
Good luck :)
3D acceleration should be in a 2D engine.
Using the 3D hardware that most people have these days is the best way to get amazing performance for your 2D games...
Good collision detection is very helpful. If you implement it efficiently, it really reduces the time required for every frame. Besides that, in my engine (for Pygame) I have a method of separating the main screen into a number of subscreens, which I find useful.
Depending on the target game type, include Navigation Graph(s) with node and edge annotations. (Good for many games, but not so much for the token side scrollers that are made with 2D graphics engines)
A component to generate them (via a flood fill algorithm).
Be sure to include all of the major path finding/planning algorithms (A*/Dijkstra/etc.) to traverse those graphs.
The pitfall of this is that you will have to define what a 'map' is for the engine, which might limit users of the engine.
Related things:
Location-based triggers (the player enters an invisible circle and something happens: cue cutscene, start ambush, etc.). I would say provide a base class for the trigger and implement some basic ones to show how it's done (e.g. weapon pickups).
Some game engines implement networking (though this is kind of part of the 'xna stuff')
The most useful thing to include above all else would be tools to easily use your engine. Maybe use your engine to create the tools. I'm sure you would find a lot of flaws that way.
Simple pixel-perfect collision detection. NOT Farseer Physics. Simple drawing routines like DrawLine, DrawCircle, etc.
A tile repeat tool. Something that allows the user to add/create a tile and manipulate the edges for a smooth repeat pattern.
Um. This list is an "internals" list. To make a great engine, make a great "externals" list.
Look at UE3, for example -- it is where it is because of its great tools. You need tools for world creation, for creating optimal packages of resources (that should be on the internals list too ;-)), for collision object specification, etc.
Plus, to add to Organiccat's answer, you should decide on the tech level. You can go for simple sprites, or you may want fancy effects (so shaders are needed, and with them the supporting infrastructure).
