Monday, March 20, 2017

Low-poly versions


Assets from Daz Studio are usually awesome, but they often have a high polygon count. This can cause performance problems, especially if, like me, you use an old Win 7 box from 2009 with only six gigs of RAM. Often the polygon count can be reduced with little effect on the final renders.

This is where the Low-poly Versions section at the bottom of the Setup panel comes in. In the unstable version it looks like this:

Here is the original character, weighing in at

Verts: 21556, Edges: 42599, Faces: 21098

The Make Quick Low-poly button reduces her footprint to

Verts: 7554, Edges: 17800, Faces: 10311

i.e. the number of vertices is reduced to roughly a third.

However, there are a number of problems, e.g. the black spots in the shoulder areas. The quick low-poly uses Blender's Decimate modifier, which does not respect UV seams. A decimated face can contain vertices from different UV islands, making the middle of the face stretch over random parts of the texture.

Moreover, since the quick low-poly applies a modifier, the mesh must not have any shapekeys. This is not such a big problem for Genesis 3, where facial expressions are implemented with bones, but for Genesis and Genesis 2 this means no face expressions. The Apply Morphs button is duplicated in this section to get rid of any shapekeys before making a quick low-poly.

To deal with these problems, a second algorithm has been implemented, the faithful low-poly. It is faithful in the sense that UV seams are respected, so there are no faces stretching over unknown parts of the texture.
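The core idea can be sketched in a few lines of plain Python (a simplified sketch of the general principle, not the actual add-on code): an edge counts as a UV seam if the faces sharing it assign different UV coordinates to its endpoints, and only non-seam edges would be candidates for collapsing.

```python
# Simplified sketch of UV-seam detection.
# Faces are lists of (vertex_index, uv) pairs, one per face corner.
def find_seam_edges(faces):
    uv_at_edge = {}  # (v1, v2) -> set of UV assignments seen for this edge
    for face in faces:
        n = len(face)
        for i in range(n):
            (v1, uv1), (v2, uv2) = face[i], face[(i + 1) % n]
            key = (min(v1, v2), max(v1, v2))
            uvs = (uv1, uv2) if v1 < v2 else (uv2, uv1)
            uv_at_edge.setdefault(key, set()).add(uvs)
    # An edge whose adjacent faces disagree on UVs lies on a seam.
    return {edge for edge, uvs in uv_at_edge.items() if len(uvs) > 1}

# Two quads sharing edge (1, 2); the second quad maps it to another UV island.
faces = [
    [(0, (0.0, 0.0)), (1, (0.5, 0.0)), (2, (0.5, 0.5)), (3, (0.0, 0.5))],
    [(1, (0.6, 0.0)), (4, (1.0, 0.0)), (5, (1.0, 0.5)), (2, (0.6, 0.5))],
]
seams = find_seam_edges(faces)  # only the shared edge (1, 2) is a seam
```

A decimator that refuses to collapse the edges in `seams` can never merge vertices across UV islands, which is exactly what prevents faces from stretching over unrelated parts of the texture.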

The footprint of the faithful low-poly is

Verts: 11885, Edges: 21136, Faces: 9306

The vertex count is somewhat higher than for the quick low-poly, but it is still a nice reduction.

There are still some problems. In the illustration the shoulders become very jagged when posed. This can be fixed by adding a Subdivision Surface modifier. If the number of view subdivisions is set to zero, the viewport performance should not suffer too much.

Another problem is that the algorithm produces n-gons, which sometimes leads to bad results. This can be fixed by the Split n-gons button.
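The effect of splitting n-gons can be illustrated with a simple fan triangulation (a sketch of the general idea, not necessarily the exact method the button uses):

```python
def fan_triangulate(face):
    """Split an n-gon (list of vertex indices) into n-2 triangles
    by fanning out from the first vertex."""
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]

pentagon = [0, 1, 2, 3, 4]
triangles = fan_triangulate(pentagon)
# A pentagon becomes 3 triangles: the vertex count is unchanged,
# but internal edges and extra faces are added.
```

This explains the numbers below: triangulation leaves the vertex count alone while edges and faces go up.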

After triangulating the n-gons, the character weighs in at

Verts: 11885, Edges: 31111, Faces: 19281

The number of edges and faces has gone up considerably, but I'm not sure that this affects performance, since complicated n-gons have been replaced by simple triangles. The number of vertices stays the same, which I believe matters most for performance.

Hair can be particularly problematic for performance. The Aldora hair, which came with some earlier version of Daz Studio, has the impressive footprint

Verts: 123506, Edges: 240687, Faces: 117264

Reducing the weight of a 21,000 verts character makes little sense if we leave her with 123,000 verts worth of hair.

Making a faithful low-poly of the hair reduces the footprint to

Verts: 32574, Edges: 61862, Faces: 29370

without any notable reduction of quality.

A second iteration of the Faithful Low-poly button reduces the hair further

Verts: 9281, Edges: 16743, Faces: 7544

Compared to the original 123,000 verts, the footprint has gone down by more than a factor of ten!

We now start to see some bald spots on the head, but it should not be too difficult to fix them in edit mode.

If we instead make a quick low-poly in the last step, the footprint becomes

Verts: 10081, Edges: 20047, Faces: 10048

The baldness problem is perhaps a little less pronounced, but some manual editing is still needed.

Another way to reduce the poly-count for some hair types is provided by the Select Random Strands button. Here is another hair with an impressive footprint:

Verts: 114074, Edges: 167685, Faces: 55553

Not all of the strands are really needed, but it would be difficult to select a suitable set manually.

We don't want the skull cap to be selected, so we hide it (select one vertex on the skull, use ctrl-L to select connected vertices, and H to hide).

Press Select Random Strands to make the selection, and then press X to delete the verts. The Keep Fraction slider was set to the default 50% in this case. There is some tendency to baldness in edit mode, but that is because the skull cap is still hidden. The render looks quite OK, and the footprint has been reduced to

Verts: 56116, Edges: 82748, Faces: 27574
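The idea behind Select Random Strands can be sketched as follows (a simplified stand-in for the actual implementation, not the add-on's code): treat each connected component of the hair mesh as a strand, and keep a random fraction of them.

```python
import random

def keep_random_strands(strands, keep_fraction=0.5, seed=None):
    """Keep roughly keep_fraction of the strands; the rest would be
    deleted. Each strand is a list of vertex indices belonging to
    one connected component of the hair mesh."""
    rng = random.Random(seed)
    return [s for s in strands if rng.random() < keep_fraction]

# Ten 3-vertex strands, keep about half of them.
strands = [[3 * i, 3 * i + 1, 3 * i + 2] for i in range(10)]
kept = keep_random_strands(strands, keep_fraction=0.5, seed=42)
```

Since each strand is kept or dropped as a whole, the remaining hair keeps its shape; only the density goes down, which is why a 50% Keep Fraction roughly halves the footprint.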

Sunday, February 19, 2017

DAZ Importer version 1.1.

A new version of the DAZ Importer for Blender is finally available. There are many improvements compared to version 1.0, including
  • Multiple instances of the same asset are treated as different objects.
  • Object transformations have been improved.
  • Roll angles are chosen to make X the main bend axis, necessary for the MHX and Rigify rigs.
  • Improved material handling, utilising DAZ Studio materials.
  • Experimental import only using information in DAZ files.
  • and many, many bug fixes.

Download link: https://www.dropbox.com/s/i4aoh578u6ss2hz/import-daz-v1.1-20170222.zip

  1. Save the zip file somewhere on your computer.
  2. In Blender, go to File > User Preferences > Add-ons
  3. Press Install From File... and select the zip file.
  4. Enable the DAZ importer.
For more information see: http://diffeomorphic.blogspot.se/p/daz-importer-version-11.html

And here I added a quick animation with MakeWalk.

Tuesday, September 27, 2016

What's Missing in Quantum Gravity IV: Objections

Today I sum up some of the more obvious objections to the arguments presented in the previous posts, and explain why these objections are not relevant.

1. The diffeomorphism algebra in $d$ dimensions has no central extension at all when $d>1$

This is correct, and it would appear to be a fatal blow to the existence of a multidimensional Virasoro algebra, since the ordinary Virasoro algebra is precisely the central extension of the diffeomorphism algebra on the circle. However, we saw in the previous post that the Virasoro extension in higher dimensions is not central, i.e. it does not commute with all diffeomorphisms. The Virasoro algebra in $d$ dimensions is essentially an extension of the diffeomorphism algebra by its module of closed $(d-1)$-forms. When $d=1$, a closed zero-form is a constant function, and the extension is a constant, i.e. central. In higher dimensions, a closed $(d-1)$-form does not commute with diffeomorphisms, but we still have a well-defined Lie algebra extension; it is simply not a central one.

2. In QFT, there are no diff anomalies at all in four dimensions

Again this statement is correct and apparently fatal, because an extension of the diffeomorphism algebra is a diff anomaly by definition. However, the caveat is the phrase "in QFT". Virasoro-like extensions of the diffeomorphism algebra in four dimensions certainly exist, but they cannot arise within the framework of QFT. The reason is simple: as we saw in the previous post, the extension is a functional of the observer's trajectory $q^\mu(t)$, and since the observer does not explicitly appear in QFT, such a functional cannot be written down within that framework. To formulate the relevant anomalies, a more general framework which explicitly involves the observer is needed.

3. Diff anomalies are gauge anomalies which are always inconsistent

In contrast to the first two objections, this statement is blatantly wrong. Counterexample: the free subcritical string, which according to the no-ghost theorem can be consistently quantized despite its conformal gauge anomaly. Of course, this does not mean that every kind of gauge anomaly can be rendered consistent; the free supercritical string and the interacting subcritical string are examples where the conformal anomaly leads to negative-norm states in the physical spectrum and hence to inconsistency. But the crucial condition is unitarity rather than triviality.

A necessary condition for a gauge anomaly to be consistent is clearly that the algebra of anomalous gauge transformations possesses non-trivial unitary representations. The Mickelsson-Faddeev (MF) algebra, which describes gauge anomalies in Yang-Mills theory, fails this criterion. It was shown by Pickrell a long time ago that the MF algebra has no "nice" non-trivial unitary representations at all. Therefore, Nature must abhor this kind of gauge anomaly, which of course is in agreement with experiments; gauge anomalies in the Standard Model do cancel. But this argument has no bearing on the situation where the algebra of gauge transformations does possess "nice" unitary representations.

4. Gauge symmetries are redundancies of the description rather than proper symmetries

This is only true classically, and quantum-mechanically in the absence of a gauge anomaly. A gauge anomaly converts a classical gauge symmetry into an ordinary quantum symmetry, which acts on the Hilbert space rather than reducing it. As an example we consider again the free string in $d$ dimensions. Classically, the physical dofs are the $d-2$ transverse modes, and this remains true in the critical case. In the subcritical case, however, there are $d-1$ physical dofs, because in addition to the transverse modes the longitudinal mode has become physical; time-like vibrations remain unphysical.

5. There are no local observables in Quantum Gravity

This is essentially the same objection as the previous one, and it assumes that there are no diff anomalies. There can be no local observables in a theory with proper diffeomorphism symmetry because the centerless Virasoro algebra has no nontrivial unitary representations. If the diffeomorphism algebra acquires a nontrivial extension upon quantization, there is no reason why local observables should not exist.

Note that the same argument applies to Conformal Field Theory (CFT). There are no local observables in a theory with infinite conformal symmetry, but that is not a problem in CFT because the relevant symmetry is not infinite conformal but Virasoro; the central charge makes a difference.

Saturday, September 17, 2016

Barefoot dancer

Since the latest release of the DAZ-Blender importer I have mainly been tweaking materials. It is pretty important that the importer produces attractive materials automatically, because DAZ meshes contain lots of materials, and modifying all of them manually involves a lot of work. For example, I think that a human character uses some seventeen different materials for different parts of the body. This has some advantages in DAZ Studio, because you can easily add makeup by modifying the lip, eyelash, fingernail or toenail materials, but having so many materials does complicate editing afterwards in Blender.

As a benchmark I will use the Barefoot Dancer, a standard product that ships (or used to ship, at least) with DAZ Studio. You can find pictures of her with Google, but since I don't know if it is legal to include other people's art on your blog (it is at the very least impolite), you will have to look them up yourself.

So here is what I did. First I created two DAZ Studio (.duf) files, one containing the character in rest pose and the other the stage (Shaded haven). Then I created two Blender files. Into the first I imported the stage, which required some tweaking because the importer does not always get object transformations right (I am working on this). Into the other I imported the character and merged the rigs (the character, the hair and each piece of clothing have separate rigs). Actually, every import was made twice: once with Blender Render as the render engine, in order to get the Blender Internal (BI) materials, and once with Cycles to get Cycles materials.

Finally, the set file was linked into the character file and the animation was loaded. I also loaded expressions, chose something suitable, and created a simple three-point lighting setup. Here is a render using Blender Internal.

If you change the render engine to Cycles, the importer creates Cycles materials instead. Now, I have never really used Cycles and I don't understand how to control lights in it - it seems like most of the lighting comes from a world surface light, whereas the actual lamps play a very marginal role. But even if the lighting is extremely dull, I think the materials look reasonably good.

The renders were made with the unstable version of the DAZ Importer, which can be downloaded from


Wednesday, September 14, 2016

What's Missing in Quantum Gravity III: The Connection Between Locality and Observer Dependence

In the first two posts (1, 2) in this series I argued that the missing ingredients in Quantum Gravity (QG) are locality and observer dependence, respectively. Now it is time to make the connection between these concepts. But before making this connection at the end of this post, we need some rather massive calculations. The main reference is my CMP paper (essentially the same as arXiv:math-ph/9810003). That paper is somewhat difficult to follow, even for myself, so I recently updated it using more standard notation, arXiv:1502.07760.

Local operators in QG are only possible if the spacetime diffeomorphism algebra in $d$ dimensions, also known as the Lie algebra of vector fields $vect(d)$, acquires an extension upon quantization. This turns $vect(d)$ into the Virasoro algebra in $d$ dimensions, $Vir(d)$. In particular, $Vir(1)$ is the ordinary Virasoro algebra, the central extension of the algebra of vector fields on the circle, $vect(1)$. $Vir(1)$ is also a crucial ingredient in Conformal Field Theory (CFT), which successfully describes critical phenomena in two dimensions.

There is a simple recipe for constructing off-shell representations of $Vir(1)$:
1. Start with classical densities, which in the context of CFT are called primary fields.
2. Introduce canonical momenta for all fields.
3. Introduce a vacuum state which is annihilated by all negative-frequency operators, both the fields and their momenta.
4. Normal order to remove an infinite vacuum energy.

The last step introduces a central charge, turning $vect(1)$ into $Vir(1)$.

One could naively try to duplicate these steps in higher dimensions, but that does not work. The reason is that normal ordering introduces an unrestricted sum over transverse directions, which makes the putative central extension infinite and thus meaningless. Instead we introduce a new step after the first one. The recipe now reads:

1. Start with classical tensor densities.
2. Expand all fields in a Taylor series around a privileged curve in $d$-dimensional spacetime, and truncate the Taylor series at order $p$.
3. Introduce canonical momenta for the Taylor data, which include both the Taylor coefficients and the points on the selected curve.
4. Introduce a vacuum state which is annihilated by all negative-frequency operators.
5. Normal order to remove an infinite vacuum energy.

A $p$-jet is locally the same thing as a Taylor series truncated at order $p$, and we will use the two terms synonymously. Since the space of $p$-jets is finite-dimensional, the space of trajectories therein is spanned by finitely many functions of a single variable, which is precisely the situation where normal ordering can be done without introducing infinities coming from transverse directions.

Locally, a vector field is of the form $\xi^\mu(x) \partial_\mu$, where $x = (x^\mu)$ is a point in $d$-dimensional space and $\partial_\mu = \partial/\partial x^\mu$ the corresponding partial derivative. The bracket of two vectors fields reads
$$[\xi, \eta] = \xi^\mu \partial_\mu \eta^\nu \partial_\nu -
\eta^\nu \partial_\nu \xi^\mu \partial_\mu.$$
$vect(d)$ is defined by the brackets
$$[{\cal L}_\xi, {\cal L}_\eta] = {\cal L}_{[\xi,\eta]}.$$
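As a quick consistency check, the vector-field bracket above can be verified to satisfy the Jacobi identity with sympy (a small sketch, assuming sympy is available; the sample fields are arbitrary):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def bracket(xi, eta):
    # [xi, eta]^nu = xi^mu d_mu eta^nu - eta^mu d_mu xi^nu
    return tuple(
        sum(xi[m] * sp.diff(eta[n], coords[m])
            - eta[m] * sp.diff(xi[n], coords[m]) for m in range(2))
        for n in range(2))

xi   = (x * y, y**2)
eta  = (sp.sin(x), x)
zeta = (y, x)

# Jacobi identity: [xi,[eta,zeta]] + [eta,[zeta,xi]] + [zeta,[xi,eta]] = 0
jacobi = tuple(
    sp.simplify(a + b + c)
    for a, b, c in zip(bracket(xi, bracket(eta, zeta)),
                       bracket(eta, bracket(zeta, xi)),
                       bracket(zeta, bracket(xi, eta))))
```

The Jacobi identity is what makes $vect(d)$ a Lie algebra, which is the structure the extension below must respect.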
The extension $Vir(d)$ depends on two parameters $c_1$ and $c_2$, both of which reduce to the central charge in one dimension:
$$\begin{aligned}
{[}{\cal L}_\xi, {\cal L}_\eta] &= {\cal L}_{[\xi,\eta]} +
\frac{1}{2\pi i} \oint dt\ \dot q^\rho(t) \Big(
c_1 \partial_\rho \partial_\nu \xi^\mu(q(t)) \partial_\mu \eta^\nu(q(t)) \\
&\qquad\qquad + c_2 \partial_\rho \partial_\mu \xi^\mu(q(t)) \partial_\nu \eta^\nu(q(t)) \Big), \\
{[}{\cal L}_\xi, q^\mu(t)] &= \xi^\mu(q(t)), \\
{[}q^\mu(t), q^\nu(t')] &= 0.
\end{aligned}$$
In particular when $d=1$, vectors only have a single component and we can ignore the spacetime indices. The extension then reduces to
$$\frac{1}{2\pi i} \oint dt\ \dot q(t) \xi^{\prime\prime}(q(t)) \eta^\prime(q(t))
= \frac{1}{2\pi i} \oint dq\ \xi^{\prime\prime}(q) \eta^\prime(q),$$
which has the ordinary Virasoro form. The terms proportional to $c_1$ and $c_2$ were first written down by Rao and Moody and by myself, respectively.
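One can check numerically that this cocycle reproduces the familiar $m^3$ behaviour of the Virasoro central term (up to normalization; the linear $m$ term in the usual $m^3 - m$ is a trivial coboundary). For Fourier modes $\xi = e^{imq}$ and $\eta = e^{inq}$ the integral evaluates to $m^3 \delta_{m+n,0}$. A small numerical sketch, assuming numpy:

```python
import numpy as np

def cocycle(m, n, N=4096):
    """(1/2*pi*i) * integral over the circle of xi''(q) eta'(q) dq,
    for the Fourier modes xi = exp(i*m*q), eta = exp(i*n*q)."""
    q = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    xi_pp = -(m**2) * np.exp(1j * m * q)  # second derivative of xi
    eta_p = 1j * n * np.exp(1j * n * q)   # first derivative of eta
    integral = np.sum(xi_pp * eta_p) * (2.0 * np.pi / N)
    return integral / (2j * np.pi)
```

For instance, `cocycle(2, -2)` gives $2^3 = 8$, while `cocycle(2, 3)` vanishes, as expected from momentum conservation on the circle.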

To carry out the construction explicitly for tensor-valued $p$-jets is quite cumbersome, but in the special case that we deal with $-1$-jets (which depend only on the expansion point, and not on any Taylor coefficients at all), formulas become manageable. Consider the Heisenberg algebra defined by the brackets
$$[q^\mu, p_\nu] = i\delta^\mu_\nu, \qquad
[q^\mu, q^\nu] = [p_\mu, p_\nu] = 0.$$
Clearly we can embed $vect(d)$ into the universal enveloping algebra of this Heisenberg algebra as follows:
$${\cal L}_\xi = i\xi^\mu(q) p_\mu.$$
This embedding immediately yields a representation of $vect(d)$ on $C^\infty(q)$, the space of smooth functions of $q$, and also on the dual space $C^\infty(p)$. However, neither of these representations is particularly interesting, because they do not exhibit any extension. With a slight modification of the construction, we immediately obtain something interesting. Consider the infinite-dimensional Heisenberg algebra where everything also depends on an extra variable $t$, which lives on the circle. It is defined by the brackets
$$[q^\mu(t), p_\nu(t')] = i\delta^\mu_\nu \delta(t-t'), \qquad
[q^\mu(t), q^\nu(t')] = [p_\mu(t), p_\nu(t')] = 0.$$
The new embedding of $vect(d)$ reads
$${\cal L}_\xi = i \oint dt\ \xi^\mu(q(t)) p_\mu(t).$$
This operator acts on the space $C^\infty[q(t)]$ of smooth functionals of $q(t)$, and, after moving the momentum operator to the left, on the dual space $C^\infty[p(t)]$. Neither of these representations yields any extension.

However, there is a more physical way to split the Heisenberg algebra into creation and annihilation operators. Since the oscillators depend on a circle parameter $t$, they can be expanded in a Fourier series, and we postulate that the components of negative frequency annihilate the vacuum state. If $p^>_\mu(t)$ and $p^<_\mu(t)$ denote the positive and negative frequency parts of $p_\mu(t)$, respectively, and analogously for $q^\mu(t)$, the state space can be identified with $C^\infty[q^>(t), p^>(t)]$. We still have a problem with an infinite vacuum energy, but this can be taken care of by normal ordering. Hence we replace the expression for ${\cal L}_\xi$ by
$$\begin{aligned}
{\cal L}_\xi &= i \oint dt\ :\xi^\mu(q(t)) p_\mu(t): \\
&\equiv i \oint dt\ \Big(\xi^\mu(q(t)) p^<_\mu(t) + p^>_\mu(t) \xi^\mu(q(t)) \Big).
\end{aligned}$$
This expression satisfies $Vir(d)$ with parameters $c_1 = 1$, $c_2 = 0$.

Finally, we can make the connection between locality and observer dependence. The off-shell representations of $Vir(1)$ act on quantized fields on the circle, and may therefore be viewed as one-dimensional Quantum Field Theory (QFT). In higher dimensions $Vir(d)$ acts on quantized $p$-jet trajectories instead, so the theory deserves the name Quantum Jet Theory (QJT). $p$-jets (Taylor series) depend not only on the function being expanded but also on the choice of expansion point, i.e. the observer's position.

The following diagram summarizes the argument:

Locality $\Rightarrow$ Virasoro extension $\Rightarrow$ $p$-jets $\Rightarrow$ observer dependence.

Sunday, September 11, 2016

What's Missing in Quantum Gravity II: Observer Dependence

In the previous post I explained why locality in Quantum Gravity (QG) requires that the spacetime diffeomorphism group acquires an extension upon quantization. Now I will turn to a more physical viewpoint and argue that what is also missing in all approaches to QG is a physical observer. At first sight it may not be obvious that the two concepts have anything to do with each other, but they are in fact closely related, as I will discuss in a later post. For the time being I will content myself with the following trivial observation:

Every real experiment is an interaction between a system and an observer, and the outcome depends on the physical properties of both. In particular, the result depends on the observer's mass.

This may seem an innocuous observation, until you realize that neither General Relativity (GR) nor Quantum Field Theory (QFT) makes predictions that depend on the observer's mass. So clearly there must be a more general, observer-dependent theory that reduces to GR and QFT in the appropriate limits. A little thought reveals that these limits are:
  • In GR, the observer's heavy mass is assumed to be zero, so the observer does not disturb the fields.
  • In QFT, the observer's inert mass is assumed to be infinite, so the observer knows where he is at all times. In particular, the observer's position and velocity at equal times commute.
Since the equivalence principle states that the heavy and inert masses are always the same, we see that GR and QFT tacitly make incompatible assumptions about the observer's mass. It then comes as no surprise that the theories cannot be combined.

The assumption about the small heavy mass is very intuitive: if the observer had a large heavy mass, he would immediately collapse into a black hole, which is not what happens in a typical experiment. However, the assumption about the infinite inert mass requires some more explanation. Here my philosophy is completely operational: in order to know something, you must measure it. In particular, in order to know where he is, the observer must measure his position, e.g. with a GPS receiver. In theory, that can be done with arbitrary accuracy. However, in order to know where he will be in the future, the observer must also be able to measure his velocity at the same time, but Heisenberg's uncertainty principle tells us that there is a limit to the precision with which the position and the momentum can be simultaneously known.

More precisely, let $\Delta x$ and $\Delta v$ denote the uncertainties in the observer's position and velocity, and assume that momentum and velocity are related by $p = Mv$, where $M$ is the observer's mass. Then
$$\Delta x \cdot \Delta v \sim \hbar/M,$$
where $\hbar$ is Planck's constant. Hence there are only two situations in which both the observer's position and velocity can be known arbitrarily well:
  • If $\hbar = 0$, i.e. in classical physics including GR. In this limit we can assume $M = 0$.
  • If $M = \infty$, which only makes sense if gravity is turned off. In this limit we have QFT in flat space.
In these two limits we can use field theory. In the general situation where $\hbar/M \neq 0$, field theory breaks down and QG must be a more general type of theory which incorporates a physical observer explicitly.
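The estimate above is nothing but Heisenberg's uncertainty relation rewritten in terms of velocity:

$$\Delta x \cdot \Delta p \gtrsim \hbar, \qquad p = Mv
\quad\Longrightarrow\quad
\Delta x \cdot \Delta v \gtrsim \frac{\hbar}{M}.$$

Both limits above make the right-hand side vanish: $\hbar \to 0$ or $M \to \infty$.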

An analogous statement can be made about interactions other than gravity: field theory assumes that the observer's charge is zero and his inert mass is infinite. However, this double limit does not pose a problem for non-gravitational interactions, because charge and mass are unrelated. So even if non-gravitational physics depends on the observer's charge in principle, this dependence is merely an experimental nuisance that can be eliminated. In gravity, where charge and mass are the same, this nuisance becomes a conceptual problem.

Rovelli makes the distinction between two types of observables: partial observables, which can be measured but not predicted, and complete observables, whose time evolution can be predicted by the theory and which are subject to quantum fluctuations. In Quantum Mechanics, there are two types of partial observables: $A$, the reading of the detector, and $t$, time measured by a clock, which combine into the complete observable $A(t)$. In QFT, there is a third type of partial observable: $x$, the position measured e.g. by a GPS receiver, and the complete observables are of the form $A(x,t)$. However, beneath this expression lies an unphysical assumption: that the pair $(x,t)$ is a partial observable that can only be measured but not predicted. A real observer obeys his own set of equations of motion, so his position at later times $x(t)$ can be predicted. Let us now change notation and call this observable $q(t)$ instead, because $x$ will be reused below.

In a physically correct treatment, we combine the three partial observables of QFT into two complete observables: $A(t)$ and $q(t)$, the readings of the detector and the GPS receiver at a certain tick of the clock. However, we immediately notice that a lot of information is gone: we no longer sample data throughout spacetime but only along the observer's trajectory. Fortunately, this is not as disastrous as it might seem at first glance, because the available local data include not only the values of the fields $A$ but also their derivatives $d^m A/dx^m$ up to arbitrary order. We can assemble the local complete observables into a Taylor series,
$$A(x,t) = \sum_m \frac{1}{m!} \frac{d^m A}{dx^m}(t) (x-q(t))^m.$$
This formula looks one-dimensional, but with multi-index notation it makes sense in higher dimensions as well. To the extent that we can identify an infinite Taylor series with the field itself, the original field has been recovered entirely, but with a twist: the Taylor series does not only depend on the field, but also on the expansion point $q(t)$, an operator which we identify with the observer's position.
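Spelled out in multi-index notation, with $m = (m_1, \dots, m_d)$, $m! = m_1! \cdots m_d!$ and $(x-q)^m = \prod_\mu (x^\mu - q^\mu)^{m_\mu}$, the expansion reads

$$A(x,t) = \sum_{m} \frac{1}{m!}\, \partial_m A(t)\, (x - q(t))^m,$$

where $\partial_m A(t)$ denotes the mixed partial derivative of order $|m| = m_1 + \dots + m_d$, evaluated on the observer's trajectory.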

In the next installment in this series this expression will be used to make the connection to locality and the multi-dimensional Virasoro algebra from the previous post.

Friday, September 9, 2016

What's missing in Quantum Gravity I: Locality

I'm planning to put up some pretty nice renders of characters imported from DAZ Studio into Blender, but before doing that I will write about something that I have thought about for a long time and believe I have some rather unique insights into (something that also motivates the name of this blog): how to construct a quantum theory of gravity. For a more exhaustive discussion of the topic in this and the next post, see arXiv:1407.6378 [gr-qc].

At the beginning of the previous century, two great theories were found that together describe all of physics: Einstein's General Relativity (GR), which describes gravity, and Quantum Theory (including Quantum Mechanics and Quantum Field Theory (QFT)), which describes everything else. The problem is that these theories seem to be mutually incompatible. But we know that Nature exists and hence cannot be inconsistent, so a consistent theory of Quantum Gravity (QG) has to exist, and it must reduce to GR and QFT in the appropriate limits.

For almost a century many great minds have tried to solve this conundrum. The most popular approaches in recent decades have been string theory and, to a lesser extent, loop quantum gravity. However, these and other approaches all have problems of their own, especially since the LHC has recently ruled out supersymmetry beyond reasonable doubt (which, incidentally, is not the same thing as beyond every doubt).

My own proposal is to go back to basics, and simply postulate that QG combines the main properties of GR and QFT: gravity, quantum theory, and locality.

Now, this may seem a very natural and innocent assumption, but it is in fact very controversial. The reason is that there is a well-known theorem stating that there are no local observables at all in QG, unless classical and quantum gravity have different sets of gauge symmetries. Actually, if you have seen this statement before, it was certainly without the caveat, but the caveat must be mentioned because it is the weak spot of the no-go theorem. To prove the theorem, one assumes that the group of all spacetime diffeomorphisms, which is the gauge symmetry of classical GR, remains a gauge symmetry after quantization.

So, we need something that converts spacetime diffeomorphisms from a classical gauge symmetry to an ordinary quantum symmetry, which acts on the Hilbert space rather than reducing it. This something is called an extension. It is well known that the diffeomorphism algebra (the infinitesimal version of the diffeomorphism group) on the circle admits a central extension called the Virasoro algebra. This is a celebrated part of modern theoretical physics. It first appeared in string theory, but later found an experimentally successful application in condensed matter, in the theory of two-dimensional phase transitions.

So the diffeomorphism algebra in one dimension has an extension, but we know that spacetime has four dimensions (at least). So we need a multi-dimensional Virasoro algebra: a nontrivial extension of the diffeomorphism algebra in higher (and in particular four) dimensions. There are several arguments why such an extension cannot exist, and I will address those in a later post, but exist it does. In fact, there are even two of them, discovered 25 years ago by Rao and Moody and by myself, respectively.