Project view: NPR render


#1

Here is my view on the way to go, as discussed in this morning's meetup on G+.

  1. It is evident that with the current funding we will not be able to get the whole project off the ground in one go. There is an alternative that could give us the funding we need.
  2. We seriously lack coding experience and have to build it up gradually.
  3. We are still not 100% sure whether OpenGL is the right way to go and what problems we might run into down that trail.

Thinking about all three obstacles, I propose the following route as the most viable option:
A) We build a render engine outside Blender, based on OpenGL (TK's suggestion). I believe there are several examples, but I do not know for sure.

  1. The main reason is that this way we can show results very quickly; let's say the goal is one year.
  2. It should be as uncomplicated as possible: simple shaders, simple UI, simple coding. The goal is just to show something, to get more people on board and gain more support and funding (see B).
  3. Understanding how to export scenes from Blender to an external renderer is the first step. We should study how LuxRender or other software does it and borrow from their code. In fact, we could ask those parties for help and explain to them what we are doing. Mastika is my number one choice because it is supported by a university, just like JOT. If there are no commercial angles, we are free to move whatever way we want.
  4. For building the render engine we have to make choices as well. TK suggested Python, which would be a wise choice if we want to incorporate pieces of the code into Blender later. I believe we will still need a fair amount of C/C++. The biggest challenge is of course building a scene for OpenGL to render.
  5. For the UI we stick to what others have built before us and copy it.
  6. Coding the shaders can already be done.
  7. There are drawbacks as well:
    a) We are far from what we actually want, i.e. the viewport.
    b) We have to get in touch with the Viewport FX project to discuss the current OpenGL rendering; if by some miracle they incorporate this or make things easier, we could have wasted valuable time.
    c) What about Freestyle?

B) Once we are done with our simple renderer, we have several options:

  1. We can expand the current external renderer.
  2. We can move to integration in Blender.
    So why 1 or 2? Going with 1 could help us get the project better funded: if we build in more features, we could create a course for a low fee and fund the integration into Blender that way. Besides, for option 2 we still depend on how the viewport project turns out and on how well we are connected to the other Blender devs.

C) Integration into Blender. The ultimate goal.

  1. Either a or b, but Freestyle has to be incorporated.
    a) Study the render pipeline of Cycles and copy that for our NPR renderer.
    b) If the viewport project incorporates better OpenGL rendering, we could go that way (which I hope is much easier).
  2. Creating the layered UI.
  3. Shaders are already in place.

D) 1) Make courses and trainings; we need funds to expand and incorporate more features. These are not only for the general public but mainly for serious artists who want to earn money with this. I see Asian countries as prime targets, but Western European countries as well, mainly targeting universities and small studios that want to operate cost-efficiently.
2) Make a Kickstarter movie with Blender NPR (preferably anime). I am pretty serious about this and strongly believe that, if well planned and executed, we can get this off the ground. But we need point C first. Besides, if Gooseberry is a success, what will the next step be for the Blender Institute to grow? OK, FX and such, but I think an NPR movie could be a nice side step.

I will absolutely commit myself to the coding part, but these phases will take years, not months. Everybody has to understand that up front. So good project management and working together are key.

Please comment as much as possible so we get a common view. This is important for the presentation at BConf 2014 as well.


#2

I’ll answer part by part.

  1. I’m open to suggestions.
  2. Noted. Hence we need to leverage coders who know better.
  3. OpenGL (GLSL) is the only option on the open-source route. We don't have a choice.

Part A:

  1. Agree on the goal of showing results quickly.
  2. Code can be tested, but what about other dependencies? e.g. mesh loading.
  3. Open for this option
  4. Not sure about this part. Can’t add any insightful comment.
  5. Yup.
  6. Coding as in how to get it done for the external renderer?

a. Agree
b. Agree, this is a must, not an option.
c. Big problem here

Part B:

  1. This is a big assumption. [very dislike]
  2. The history of Freestyle repeats itself. Time is of the essence.

I don't think parts A and B are wise.

  • Integration will take very long.
  • Funding… it will cost more.
  • Hard to test the architecture outside of Blender

Part C:
1a. This is the better option IMO.
1b. This is not an "if"; BEER is what Blenderheads are desperately requesting. Also, OpenSubdiv 3 integration will need the OpenGL upgrade (which isn't well communicated by the BF).
2. This is the UI; we can put it last. But it is still easier to code.
3. Not sure what you mean by this.

Part D:

  1. This is when BEER is done. So it is a big assumption.
  2. Another big assumption in the latter part. Kickstarter >> fund BEER better. XD

Jokes aside: this route (A, B) disconnects us more from the core development team. C is the proper way, as Ton put it. The BNPR Store will always provide quality courses regardless of BEER. I'm currently planning a multipart course on NPR scene building in Blender. It works regardless of which render engine is used (it even works in GLSL). This course will also fund BEER development.

I will add more of my thoughts on this later.


#3

I would keep my suggestions simple: forget how to manage the project for now, and focus on what you want from the artists' point of view.

Blender NPR development is in strong need of ideas about what features NPR artists wish to have in Blender. Ton said the BEER project has a lot of interesting ideas, but personally I am still not convinced about what BEER is eventually going to provide NPR artists with.

So,

  • Identify a small, very well defined set of functions you all guys agree to have as an initial step.
  • Describe the features by example (by Photoshopping expected renders and mocking the UI up).
  • Try to provide coders with a concrete and detailed specification of the features and convince them that implementing the described features is feasible.

Be as specific and precise as possible. No additional general stuff is necessary.

Assume that you have full-time paid developers for the project. Don't worry about funding for now. I believe attracting more volunteer coders interested in the feature set is the way to go. And that is quite doable without funding.

These suggestions apply to both the project in general and the BConf presentation.

Thanks,
T.K.


#4

T.K.,

By that you mean a roadmap, which we don’t know how to do now. We need a coder to advise us on the way forward.


#5

No, I am not talking about a roadmap (that is general stuff).


#6

What I see as the basic requirements for a basic BEER are:

  1. OpenGL upgrade (at least OpenGL 3, to get the viewport to better quality (code-wise) to handle BEER)
  2. Shader layer system with shader modifiers
  3. Better NPR shader primitives (a few basics, i.e. light-dependent, light & view-dependent, shadeless)
  4. Combine pass rewrite (from BI)

The later requirements are:
a. Animating shader UI
b. Texture rewrite (from BI)
c. Compositor rewrite (from BI)
d. Lighting rewrite, with lighting management system (from BI)
e. Shader layer UI

But from 1 to 4, those are huge!

Edit:
Going feature by feature won't give a good idea, but describing the paradigm shift with a small number of features will show the essence of BEER.


#7

I think what TK meant is to show different 3D artists how they can use BEER to improve their workflow.
For example, BEER has a shader layer system; show how this layer system will benefit them.


#8

Good point, mclelun. Indeed I meant a detailed description of BEER features from coders’ POV, but the same description for sure serves as a proposal of improved NPR workflows to artists.

For instance, I was about to ask where the 2D color ramp goes in the planned BEER architectural components and how it interacts with existing Blender tools and new BEER elements. Answering these questions will help make the planning concrete and easy to understand from both coders' and artists' points of view.


#9

To answer where X-toon (2D color ramp) goes in BEER:

X-toon goes where the current 1D color ramp is. It is easier to say that X-toon and the 1D color ramp are the same shader modifier. The only things that differ are that X-toon has:

  1. mapping type (square or radial)
  2. algorithm for the 2nd axis.
  3. parameter for each algorithm

For square mapping, the 2nd axis can be height, depth, depth from object, speed/velocity, keyed (a slider), etc.
For radial, you can map contour shading, simplified normals, etc.
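
To make the 2nd-axis idea concrete, here is a tiny, purely illustrative sketch (all names are hypothetical, not BEER code) of how the X-toon lookup generalizes the 1D ramp:

```python
# Hypothetical sketch: X-toon as the existing 1D ramp plus a second lookup axis.
# 'tone' is the usual ramp input (e.g. N.L in 0..1); the 2nd axis comes from
# whichever algorithm is selected, each with its own parameters.
def detail_axis(algorithm, fragment, params):
    if algorithm == 'DEPTH':
        # Normalize camera depth into 0..1 using the algorithm's own parameters.
        span = max(params['z_max'] - params['z_min'], 1e-6)
        return min(max((fragment['depth'] - params['z_min']) / span, 0.0), 1.0)
    if algorithm == 'SPEED':
        return min(fragment['speed'] / params['max_speed'], 1.0)
    if algorithm == 'KEYED':
        return params['slider']          # driven/animated by the user
    return 0.0                           # falls back to plain 1D ramp behaviour

def xtoon_sample(ramp_2d, tone, detail):
    # A plain 1D ramp is just the degenerate case where 'detail' is constant.
    return ramp_2d.sample(u=tone, v=detail)
```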


#10

It should be more visual: where the buttons are, how the widget looks. That's a bit harder than just describing it (i.e. it takes more time).

Still, to wrap things up from all the discussions in the forum:

  • We do not go outside, we stay inside.
  • There are two main streams:
    a) the null render based on Cycles
    b) the UI
  • We should specify what we want in the UI.

#11

For those who are not following our discussion, here is the rough plan:

Null render
This is a render engine without a renderer; the purpose is to bring in all the data needed to build a render engine. Once the null render is ready, it serves as the base for building the renderer.

BEER UI
This is the UI coded in Python. It serves as the base design for the UX and features from the user's perspective. Once coding of the UI starts, we can test and polish it. Also, users can see how BEER works and its workflow, and provide constructive feedback to improve it.
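
For the curious, a rough sketch of what a null render could look like through Blender's Python render engine API (2.7x-era signatures; names and details are illustrative, not the actual BEER code):

```python
import bpy

class BEERNullRenderEngine(bpy.types.RenderEngine):
    """Hypothetical null renderer: walks the scene data a real renderer would
    need, then returns an empty result so Blender's render pipeline completes."""
    bl_idname = "BEER_NULL"
    bl_label = "BEER Null Render"

    def render(self, scene):
        # Gather the data a real renderer needs: objects, meshes, materials.
        for obj in scene.objects:
            if obj.type == 'MESH':
                mesh = obj.to_mesh(scene, True, 'RENDER')
                print(obj.name, len(mesh.vertices), len(mesh.polygons))
                bpy.data.meshes.remove(mesh)
        # No actual rendering: hand back an empty (black) layer.
        result = self.begin_result(0, 0, scene.render.resolution_x,
                                   scene.render.resolution_y)
        self.end_result(result)

def register():
    bpy.utils.register_class(BEERNullRenderEngine)
```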


#13

YafaRay/LuxRender might be a better model than Cycles, at least for the prototype stage. It offers better portability and shorter iteration cycles than Cycles (as it doesn't require rebuilding Blender each time). Within a few hours, I was able to strip it down to the bare minimum.

While the coders work, the artists among us could get started on the NPR movie project. It would be a very good way to draw attention and funding, especially if it were rendered with the BEER prototype.


#14

Hmm, yeah, we looked into LuxRender a bit (good documentation), but the thing is we do not want to use an external renderer; we want to make it internal.


#15

I completely agree that BEER needs to be made internal. But the prototype itself could be external because, like you pointed out in the first post, we need to get something tangible quickly in order to show everyone BEER's interface and workflow.

By the time the prototype is finished, some of us will have had enough time to study Blender's code extensively. This would make it much easier to internalize BEER.

External renderers in Blender can access a lot of Blender's functionality, such as compositing, material previews, and even updating the viewport. So for now at least, there's little to lose by making it external.


#16

Well, we had this discussion as well. Internal or external, if you can make it in Python it would not be a problem. The only thing I was interested in is how to load an OpenGL library in Blender when one is already loaded; a call to gl.something could be picked up by two handlers. But my experience in Python is limited, and my understanding of hooking up a Python OpenGL library even less. This has an impact on the internal/external discussion as well: if we are stuck with OpenGL 2.0, then internal; but if OpenGL 3+ is needed, then external, unless that problem is solved. I am still doing some tutorials to get to grips with Python, but I see there are some very easy ways to get a render up fast (and I mean fast), because you can easily access the Python libraries from Blender itself and use a scene to render (without difficult textures etc.).
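
To make the "already loaded" point concrete: Blender ships its own GL bindings in the built-in bgl module, so no second OpenGL library has to be imported from Python. A minimal sketch (2.7x-era immediate-mode calls; purely illustrative):

```python
import bpy
import bgl

def draw_scene_wireframe():
    # Immediate-mode GL through Blender's bgl module (roughly GL 2.0 level in
    # the 2.7x series), drawing every mesh edge in world space.
    bgl.glColor3f(1.0, 1.0, 1.0)
    for obj in bpy.context.scene.objects:
        if obj.type != 'MESH':
            continue
        mesh = obj.data
        bgl.glBegin(bgl.GL_LINES)
        for edge in mesh.edges:
            for index in edge.vertices:
                co = obj.matrix_world * mesh.vertices[index].co  # '*' in 2.7x
                bgl.glVertex3f(co.x, co.y, co.z)
        bgl.glEnd()

# Register as a viewport draw callback so the calls run with a valid GL context.
handle = bpy.types.SpaceView3D.draw_handler_add(
    draw_scene_wireframe, (), 'WINDOW', 'POST_VIEW')
```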


#17

The BNPR team has a short movie project to showcase BEER.
http://projectfalcon.org


#18

Hello @khalibloo and welcome! :slight_smile:

The main problem we saw in that plan is that a prototype must do the same things as the final product.
In this case, using an external renderer means things would be done in a VERY different way… so in the end you don't learn much in the process and you have to redo all the work later.

Also an external renderer has many limitations that would be very difficult to manage for a project like BEER.

Right now the consensus is that we need to see what will happen with Viewport FX and OpenGL 3+ support: if that's relatively fast (6–12 months), we can start learning how to set up a renderer (the "null renderer" idea) and then use the Viewport FX branch with OpenGL 3 to work on the "real" BEER renderer.

Meanwhile we can start to work out how to create a nice and comfortable user interface using the existing UI controls, or eventually try to design and code new ones (I can't see any decent "layer" interface, and @Light_Bwk assures me that layers are something we really need to have :slight_smile: )


#19

I see… so that gives us enough time to get familiar with Blender's code.

Just out of curiosity, which limitations of the external renderer approach put you off? It appeared to me as though external renderers have just about every privilege that the internal ones have, including bgl (Blender's OpenGL wrapper).

As for the interface, yeah, I saw some problems the first time I looked at the concept, especially with the folders and the modifiers bundled with each shader layer. With Blender's current set of UI elements, it's going to be really tough to implement such a design.

One idea I've been playing with in my head is the texture stack model! The texture stack is essentially a layer system. That way, the user accesses only the active layer's settings and modifiers at a time (and does not get overwhelmed). It also frees a lot of UI space.
It still wouldn't solve the folders problem, though.


#20

Currently Blender doesn't have a g-buffer, so textures can't be stacked as in the concept. I saw devs discussing including that after artists complained about how hard it is to paint only one layer. But it needs the OpenGL 3 upgrade.

Can you elaborate on…

  1. UI element difficulty
  2. Folder problem

For me the folder problem is easy: it is just UI, and the data in it stays the same.
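
A tiny sketch of what I mean (hypothetical names, not a proposed API): the folder/layer/modifier data is one nested structure, and only how it is drawn changes.

```python
# Hypothetical data model: folders, shader layers and modifiers form one tree.
# However the UI ends up drawing it (panels, boxes, an outliner-style list),
# this underlying data stays exactly the same.
class ShaderModifier:
    def __init__(self, name):
        self.name = name

class ShaderLayer:
    def __init__(self, name, modifiers=None):
        self.name = name
        self.modifiers = modifiers or []

class Folder:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # folders or shader layers, nested freely

stack = Folder("Character", [
    Folder("Skin", [ShaderLayer("Base", [ShaderModifier("1D Ramp")])]),
    ShaderLayer("Outline", [ShaderModifier("X-toon")]),
])
```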


#21

There is a very limited number of UI elements in Blender, especially ones that suggest hierarchy. For example, in the concept it goes something like this: Folder > Folder > Shader Layer > Shader Modifier (not forgetting that a folder could potentially contain other folders). If we try to represent that example in Blender's UI, we might have something like this: Panel > Panel > column.boxbox > column.boxbox.

The concept art represented the folder as a panel.
The first problem here is that a panel can't contain another panel in Blender, as in a folder in a folder.
Secondly, if the folders themselves are panels, all sorts of issues arise. Blender registers panels on a first-come-first-served basis; therefore, newly created panels go to the bottom of the page.

This layout is made up of a column with the align option set to True and two boxes aligned together: one for the header and one for the body.
A layout like this (a shader layer) can't contain another similar layout (its modifiers).
A box in a column can't contain another box in a column; Blender doesn't seem to recognize a column in a box.

I’m working on a mockup, perhaps it will help me explain better.