In my discussions with prospects and customers of DigiFabster, the issue of “complexity” of models comes up regularly. However, because we have been planning and executing a data-entry overhaul for almost a year, we never really had a chance to focus on it.
Now that the aforementioned overhaul is in beta and things will run their course, I had some time to analyze and synthesize my interview notes on the subject. The problem my interlocutors described was the following:
Most of the time, the volume of a model says very little about the resources (machine time, labor and material) that will be spent on it.
This has to do with a property that, for lack of a better word, I have been calling “complexity”.
Why is complexity so important that it keeps popping up in conversations? Easy: we are in a business that competes with other manufacturing techniques, and that wins out only if the objects to be created are very hard or even impossible to make with traditional tools and methods.
Nobody is going to order nails from a 3D print shop,
because a nail is just a piece of steel wire with a point at one end and a head at the other. Machines to make nails have been around for 150 years now, and 3D printers are not going to replace them.
A 3D lattice,
on the other hand, as used in hollow objects to give them strength, is very hard to produce with traditional machinery. One could take a block of steel and drill it many times from all sides, and get something resembling a three-dimensional lattice, but that’s a lot of work, plus a lot of material (the steel coming out of the drill holes) wasted. Or one could spot-weld wires together, but that’s a lot of welds in a small volume.
On a 3D printer, creating a 3D lattice is easy, but still expensive. The problems one faces are huge print files, long print times and the difficulty of cleaning up the print afterwards. This cleanup leads to high costs in manual labor and material waste.
What I will try to do in this article is combine simple figures, describing simple geometrical features of an object, to pin down this complexity and define it without having to look at, or understand, the object. In fact, to think of a way to make the computer do the work.
The lattice example above incidentally gives us a first indication of how to judge “complexity” from numbers, without looking at the model: a complex model tends to have a big file size.
Another indicator is the relatively small volume of the object compared to the volume of the bounding box it occupies. A third, and the one I want to go into here, is the ratio between the volume and the surface of an object.
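To make the three indicators concrete, here is a minimal sketch of how they could be computed from numbers any slicer or mesh library can report. The function name and inputs are illustrative, not an actual DigiFabster API:

```python
# Hypothetical sketch: three quick "complexity" indicators for a model,
# computed from figures any mesh library or slicer can report.
def complexity_indicators(file_size_bytes, volume_mm3, surface_mm2, bbox_mm):
    bx, by, bz = bbox_mm
    return {
        "file_size_kb": file_size_bytes / 1024,          # big file -> complex
        "bbox_fill": volume_mm3 / (bx * by * bz),        # small -> complex
        "surface_per_volume": surface_mm2 / volume_mm3,  # big -> complex
    }

# The 40 mm sphere discussed below (file size is an illustrative guess):
# it mostly fills its box and has little surface per unit of volume.
print(complexity_indicators(35_000, 33438.0, 5020.0, (40, 40, 40)))
```

Note that the third indicator is scale-dependent, which is exactly the problem addressed further down.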
Consider the least complex 3D object possible:
a sphere. A sphere has maximum volume for minimum surface and is basically the form matter will revert to if no restrictions apply.
Starting from a sphere with a diameter of 40 mm (the default in Windows 3D Builder), I get a volume of 33438 mm3 and a surface of 5020 mm2 (mesh values; the exact formulas give about 33510 mm3 and 5027 mm2). Going to a slightly more complex shape, a cube, I take the cube root of the volume to get the side length of the cube: 32.2166 mm. The surface area of the cube thus becomes:
6 x 32.2166 x 32.2166 = 6227.47 mm2, which is about 24% more than the sphere had. I then cut the cube in half and stack the halves into a T. The volume is of course the same, but the surface is now 7784 mm2, already 55% bigger than that of the sphere I started with.
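The sphere-to-cube step can be rechecked with the exact formulas (the article’s 33438 mm3 and 5020 mm2 come from 3D Builder’s faceted mesh, so they sit slightly below these values):

```python
import math

# Sphere of diameter 40 mm, then a cube with the same volume.
d = 40.0
sphere_volume = (4 / 3) * math.pi * (d / 2) ** 3   # ~33510 mm3
sphere_surface = 4 * math.pi * (d / 2) ** 2        # ~5027 mm2

side = sphere_volume ** (1 / 3)                    # equal-volume cube, ~32.24 mm
cube_surface = 6 * side ** 2                       # ~6236 mm2
print(cube_surface / sphere_surface)               # ~1.24: about 24% more surface
```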
I think it’s evident that I can keep complicating my object without increasing its volume, and there is hardly any limit to the surface I can get, provided there’s no limit on the minimum size of the features I’m forming. But there are minimum feature sizes in 3D printing, so to keep things realistic:
I can turn the original sphere volume of 33438 mm3 into a wire with a diameter of one millimeter. I would get a wire with a length of about 42575 mm (43 meters) and a surface of 133752 mm2. I could curl up that wire like spaghetti and get a bounding box not much bigger than the 40 x 40 x 40 one I started from, with the same model volume but a surface roughly 26 times as big. It would look something like this:
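The wire arithmetic can be rechecked from the volume alone; note that for a cylinder the lateral surface works out to exactly four times the volume divided by the diameter:

```python
import math

# Squeeze the sphere's volume into a 1 mm diameter wire.
volume = 33438.0                                   # mm3, the sphere's volume
wire_d = 1.0                                       # mm
length = volume / (math.pi * (wire_d / 2) ** 2)    # ~42575 mm, about 43 m
surface = math.pi * wire_d * length                # = 4 * volume / wire_d
print(length, surface, surface / 5020.0)           # surface is ~26.6x the sphere's
```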
Such a model would be much more expensive to produce on a 3D printer than the original sphere with the same volume, for the following reasons:
-Material use: Depending on the technology (let’s say FDM), the sphere would normally be printed with an infill density of 15-25%, but the wire will be solid. So even though the volume doesn’t increase, the material use will, by roughly 300% in this case. In binder jetting (3DP), as another example, binder use would go up at a similar rate. Laser sintering machines would lose a lot of powder, which gets half-sintered left and right of the scan paths, and so on.
-Machine time: Again with the example of FDM: filling a hollow sphere is straightforward; going up and down, the printer’s hot end keeps a high horizontal speed as it fills the sphere. In the case of the spaghetti mentioned above, it will have to keep changing direction, stopping and starting.
-Man-hours: Cleaning a sphere with a 40 mm diameter would take two minutes; cleaning 43 meters of curled-up steel spaghetti is a different story altogether (and probably impossible).
Now the original question was: how to judge “complexity” just by the numbers, without looking at or understanding the model? The ratio between surface and volume seems a good starting point.
You would still have to compensate for scale: the smaller the object, the bigger, relatively, its surface. The original sphere with a diameter of 40 mm had a volume of 33438 mm3 and a surface of 5020 mm2, a ratio of 6.7:1; the same sphere with a diameter of 20 mm (scale 1:2) would have a volume of 4179.75 mm3 and a surface of 1255 mm2, a ratio of 3.3:1. But it would be strange to say that a 20 mm sphere is “more complex” than a 40 mm sphere.
The scaling effect, however, can be eliminated by taking the square roots of the respective surfaces and the cube root of the respective volumes:
For the 40 mm sphere we get: the square root of the surface (5020) is 70.85, the cube root of the volume (33438) is 32.22, and 70.85/32.22 = 2.20.
Now the 20 mm sphere: the square root of the surface (1255) is 35.43, the cube root of the volume (4179.75) is 16.11, and 35.43/16.11 = 2.20. The “complexity constant” for a sphere is 2.20.
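The ratio above is scale-invariant by construction: surface grows with the square and volume with the cube of the linear size, so the square root and cube root cancel the scale out. A few lines of Python confirm it:

```python
# Dimensionless "complexity" ratio: sqrt(surface) / cbrt(volume).
def complexity(surface_mm2, volume_mm3):
    return surface_mm2 ** 0.5 / volume_mm3 ** (1 / 3)

# The two spheres, one twice the size of the other, score the same:
print(round(complexity(5020.0, 33438.0), 2))   # 40 mm sphere -> 2.2
print(round(complexity(1255.0, 4179.75), 2))   # 20 mm sphere -> 2.2
```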
Now for two cubes, one with sides of 32.22 mm, one with sides of 16.11 mm:
For the 32 mm cube we get: the square root of the surface (6227.47) is 78.91, the cube root of the volume (33438) is 32.22, and 78.91/32.22 = 2.45.
For the 16 mm cube we get: the square root of the surface (1556.87) is 39.46, the cube root of the volume (4179.75) is 16.11, and 39.46/16.11 = 2.45. The “complexity constant” for a cube is 2.45.
Time for the steel spaghetti. As discussed, it is a wire with a diameter of 1 mm and a length of 43 meters, curled up on itself so as to fit a bounding box close to 40 x 40 x 40, with the same volume as the 40 mm sphere. The numbers:
43 meters of steel spaghetti: the square root of the surface (133752) is 365.72, the cube root of the volume (33438) is 32.22, and 365.72/32.22 = 11.35. Now we scale the steel spaghetti 1:2.
22 meters of steel spaghetti: the square root of the surface (33438) is 182.86, the cube root of the volume (4179.75) is 16.11, and 182.86/16.11 = 11.35. The complexity constant for this particular bunch of steel spaghetti is 11.35, whatever the scale.
So the sphere has a “complexity constant” of 2.20, and the steel spaghetti has a “complexity constant” of 11.35. I would propose dividing both numbers by 2.20, so the “complexity constant” for the sphere becomes 1 and the one for the steel spaghetti becomes 5.16.
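The normalization can be folded into one function; the sphere’s ratio even has a closed form, sqrt(4*pi) / cbrt(4*pi/3), which is about 2.1992:

```python
import math

# Normalize by the sphere's ratio so a sphere scores exactly 1.0
# and every other shape scores higher.
SPHERE_RATIO = math.sqrt(4 * math.pi) / (4 * math.pi / 3) ** (1 / 3)  # ~2.1992

def normalized_complexity(surface_mm2, volume_mm3):
    return (surface_mm2 ** 0.5 / volume_mm3 ** (1 / 3)) / SPHERE_RATIO

print(round(normalized_complexity(5020.0, 33438.0), 2))    # sphere: ~1.0
print(round(normalized_complexity(133752.0, 33438.0), 2))  # spaghetti: ~5.16
```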
But now for the main question: should an online model price calculation program like DigiFabster incorporate “complexity constants” as a multiplier when predicting cost? Or would loading costs onto the surface area of the model be sufficient? Or should another calculation method be used, for example the ratio of bounding-box volume to model surface area? I can think of ten more methods, but I’d rather have some feedback at this point. So I’m looking forward to reactions :-)