You can solve the un-routable blocks problem by having each of the second tier load balancers send traffic to more than one block (but less than all the blocks).
I haven’t run the simulation, but it seems like that might be a good middle ground.
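A minimal sketch of that overlap scheme (the wrap-around assignment, the block counts, and all names here are my own, not from the post or its simulation):

```python
def assign_blocks(n_lbs: int, n_blocks: int, k: int) -> list[set[int]]:
    # LB i serves k consecutive blocks (mod n_blocks): more than one, fewer than all.
    return [{(i + j) % n_blocks for j in range(k)} for i in range(n_lbs)]

def reachable_blocks(assignments: list[set[int]]) -> set[int]:
    # A block stays routable as long as at least one live LB points at it.
    return set().union(*assignments) if assignments else set()

lbs = assign_blocks(n_lbs=10, n_blocks=10, k=2)
# Kill any single second-tier LB: the overlap keeps every block reachable.
for dead in range(10):
    alive = lbs[:dead] + lbs[dead + 1:]
    assert reachable_blocks(alive) == set(range(10))
```

With k = 1 (disjoint blocks) the same loop fails, which is exactly the un-routable-blocks problem this comment is working around.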
Aphyr, on
Ah, thanks Sean. I’ll update the post. Formal definitions are hard!
Sean Cribbs, on
On the other hand, having a bound on the time a request can take before aborting is a way to achieve liveness, i.e. that a response happens, even if it is a failure state. A non-live system would let the request continue even if it had no chance of completing. In that case, the liveness property is “eventual service”. Aborting the request because of timeout ensures that a future resubmission has the possibility of completing successfully.
Sean Cribbs, on
The strict definition of liveness is: for any partial execution, there exists a sequence of events/steps which will achieve the desired outcome. That does not imply something completes in a finite amount of time – that’s a safety property, because there is a specific point in time at which it can be violated. Liveness properties are violated when there is a state in which the system can never achieve the desired outcome.
Also, temporality is hard.
Aphyr, on
Cloud Foundry’s next-generation router has very little testing, and we’re hesitant to move to it. It takes the odd approach of using a (very high speed) message bus with a subscription/first responder method to do the reverse proxy vs a frequently updated, but fundamentally static in-memory hash checked via embedded Lua in Nginx.
Without knowing your infrastructure… I blindly recommend haproxy to everyone who isn’t sure what load balancer to use. I’ve used (haproxy for load balancing) -> (nginx for rewrites and static files) -> (app servers) with excellent results. The least-conns balancer and health checks built into haproxy work nicely for seamless app deployment, and support for kernel TCP stream splicing minimizes the latency cost.
Aphyr, on
Very interesting! But I think you should edit the post to be a little less misleading wrt ‘Bamboo’ and ‘Rails’. Heroku’s random routing has nothing to do with Bamboo. It affects Cedar just as much. The only difference that’s relevant here is that Cedar supports multi-threaded app servers, of which there are many that work with Rails.
Since your post stands alone without ANY talk of Bamboo/Cedar or Rails, I hope you will consider removing that stuff.
The dynamics of this routing system depend on a.) the existence of per-dyno queues and b.) the invariance of the server response distribution. Neither of those assumptions applies to a concurrent-server stack like Cedar, as far as I understand from Heroku’s posts.
DynamoDB (or Cassandra, or whatever NoSQL distributed atomic counter du jour you like), for example, has no problem returning in <50ms against pretty sizeable read/write traffic, and it seems pretty trivial (and important!) to shard pools of dynos per router rather than trying to treat dynos as a globally shared pool of resources.
In theory, the top-level routing system is globally distributed, which means you could face inter-DC latencies for consensus: at least one round trip, and likely more than that. There’s a reason folks don’t split Dynamo over multiple datacenters without special rack-aware reasoning; on inhomogeneous networks you can get spectacular tail latencies. The problem is that the dynamics of the system–like dyno queue depths–vary on a shorter timescale than an inter-DC system can reach consensus, which renders any consensus-based approach useless. The only solutions I can think of are:
1.) Go stateless
2.) Be stateful, but only over dynamics that change slowly, like overall DC load
3.) Localize state to a system with tight latency bounds, like a single DC or machine.
In practice, your DNS system is already balancing traffic on a geographic basis, which helps you choose #3; a hybrid stateless/short-latency CP system is what I describe in the second post.
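The timescale argument can be made concrete with a toy queue model; this is a sketch in the spirit of the post’s simulations, with backend counts and arrival/service rates that are entirely my own assumptions:

```python
import random

def simulate(route, n_backends=10, n_requests=20000, seed=0):
    """Mean queueing delay for one router feeding FIFO backends."""
    rng = random.Random(seed)
    free_at = [0.0] * n_backends         # when each backend's queue drains
    waits = []
    t = 0.0
    for _ in range(n_requests):
        t += rng.expovariate(9.0)        # ~90% utilization across 10 backends
        b = route(rng, free_at, t)
        wait = max(0.0, free_at[b] - t)  # time spent queued behind other requests
        waits.append(wait)
        free_at[b] = t + wait + rng.expovariate(1.0)  # exponential service times
    return sum(waits) / len(waits)

def rand_lb(rng, free_at, t):            # stateless: needs no coordination
    return rng.randrange(len(free_at))

def least_loaded(rng, free_at, t):       # stateful: needs fresh queue information
    return min(range(len(free_at)), key=lambda b: free_at[b])

# At high utilization the stateful router wins by a wide margin -- but only
# if its view of the queues is fresher than the queues' own dynamics.
assert simulate(least_loaded) < simulate(rand_lb)
```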
Ralph, on
“Heroku can’t use round-robin or min-conns load balancers for their whole infrastructure–it’s just too big a problem to coordinate.”
What’s the basis for this statement? DynamoDB (or Cassandra, or whatever NoSQL distributed atomic counter du jour you like), for example, has no problem returning in <50ms against pretty sizeable read/write traffic, and it seems pretty trivial (and important!) to shard pools of dynos per router rather than trying to treat dynos as a globally shared pool of resources.
Oh. You cover this in your next blog post. Excellent. And well done. (although a bare < breaks your comment system…)
Ralph, on
“Heroku can’t use round-robin or min-conns load balancers for their whole infrastructure–it’s just too big a problem to coordinate.”
What’s the basis for this statement? DynamoDB (or Cassandra, or whatever NoSQL distributed atomic counter du jour you like), for example, has no problem returning in <50ms against pretty sizeable read/write traffic, and it seems pretty trivial (and important!) to shard pools of dynos per router rather than trying to treat dynos as a globally shared pool of resources.
Oh. You cover this in your next blog post. Excellent. And well done.
Troy Howard, on
At AppFog, we have a relatively simplistic LB/routing layer: round-robin selection across the LB to the cache/reverse-proxy, then randomized to the app instances behind it. We were just talking about changing the router code to an LRU style of routing (which wouldn’t really be very different from round-robin), but backed off on it for fear of fucking up what was already a pretty effective routing layer.
Mostly our impetus for change was that after a code-review, we reacted to the randomization with “there’s no way random could be better than something more intelligent”… but we backed off because we felt that introducing anything other than a constant-time algo to the routing pipeline was a recipe for dangerous outcomes.
It would be fun to model it this way and see which method proves better. Also, we’re on the non-bleeding-edge version of our Cloud Foundry components. Cloud Foundry’s next-generation router has very little testing, and we’re hesitant to move to it. It takes the odd approach of using a (very high-speed) message bus with a subscription/first-responder method to do the reverse proxy vs a frequently updated, but fundamentally static, in-memory hash checked via embedded Lua in Nginx. Neither one is very confidence-inspiring to me.
It looks kind of scary and we’re going to have to put it through a lot of serious testing before we release it, because at our scale we can’t handle anything slower than what we’re already working with, and theory is too hard to evaluate.
Thanks,
Troy
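The constant-time concern Troy raises is easy to see in code; a hypothetical sketch (class names are mine):

```python
import itertools
import random

class RoundRobin:
    def __init__(self, backends):
        self.cycle = itertools.cycle(backends)  # O(1) per pick, no shared counters
    def pick(self):
        return next(self.cycle)

class RandomPick:
    def __init__(self, backends):
        self.backends = backends
    def pick(self):
        return random.choice(self.backends)     # also O(1), and fully stateless

class LeastConns:
    def __init__(self, backends):
        self.conns = {b: 0 for b in backends}   # shared mutable state
    def pick(self):
        # Naive scan: O(n) per pick, and the counts must stay accurate under
        # concurrency -- the "recipe for dangerous outcomes" described above.
        b = min(self.conns, key=self.conns.get)
        self.conns[b] += 1
        return b

rr = RoundRobin(["a", "b", "c"])
assert [rr.pick() for _ in range(4)] == ["a", "b", "c", "a"]
```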
Stefan Wrobel, on
Please label your graph axes
angrycub, on
Kyle, Thank you for taking the time to write this post. Beautifully and thoughtfully articulated. Touching.
dude, on
Very interesting! But I think you should edit the post to be a little less misleading wrt ‘Bamboo’ and ‘Rails’. Heroku’s random routing has nothing to do with Bamboo. It affects Cedar just as much. The only difference that’s relevant here is that Cedar supports multi-threaded app servers, of which there are many that work with Rails.
Since your post stands alone without ANY talk of Bamboo/Cedar or Rails, I hope you will consider removing that stuff.
Matt Trifiro, on
Wow. What an extraordinary and straightforward explanation of a hugely complex topic.
Aphyr, on
Exactly, Andrew. Solving problems requires developing a notation and a model to understand and express them. I spend a lot of time drawing diagrams, but wherever possible I try to develop executable models of the problem; such that the explanation of the problem is the code. Cuts down on the number of internal mappings you have to keep in your head.
Anonymous coward, on
Bro, I am gonna kick your harasser’s ass. I would imagine San Francisco being more accepting of people who differ from the privileged “straight white male” norm.
Andrew , on
Fascinating stuff. Taken from another angle, one could argue that once you had a working domain model, no matter how informal, it enabled you to make the insight detailed above - right?
Nicole Johns, on
Thank you. In my work life I often come into contact with people from ‘marginalized’ communities that have had life experiences much different than my own. And most of the time, we meet on the human level with respect and compassion and life is good. But other times, a person will only see my skin color, my education, my class and not see that deep inside this white, overly-educated person is a sister, an ally. I have had experiences which parallel some of their experiences, but they will never know that, because they see what they want to see and never stop to ask or listen. I am equally (at times) as guilty of assuming a person’s story based on outward appearances or my own experiences. So thank you for saying what needs to be said and reminding me what I need to keep working towards - a listening heart.
Kellan at Etsy gave a talk about how they approached this issue: http://firstround.com/article/How-Etsy-Grew-their-Number-of-Female-Engineers-by-500-in-One-Year
Soooo… Node.js is cancer. I feel bad for the naive idiots who followed the hype.
Aphyr, on
Momentum is only defined relative to a reference frame. If you accept special relativity’s postulate that inertial reference frames are indistinguishable, there’s no reason for a ship to match any particular reference frame when it emerges from hyperspace.
“OK,” you say. “Our FTL drive behaves differently near gravity wells. At the end of a jump, when the ship returns to normal space, it sticks to the nearest heavy object. That’s what I mean by ‘lose all momentum’.”
General relativity is a local theory of spacetime, so any constraints we impose on this drive need to be phrased in terms of local spatial invariants. We could say that when a ship re-enters normal space, it does it in such a way that the instantaneous time differential of the local stress-energy tensor is zero, i.e. the force of gravity doesn’t change in direction or strength over time.
This still allows for spectacular relative velocities, because gravity falls off quadratically with distance. Sure, if you jump into earth orbit you’ll be stationary; but if you jump into the system, say, three days forward in earth’s orbit, all you have to do is wait and earth will smash into you. Same deal, really.
“OK, so we constrain FTL exits such that our exit vectors put us in a stable orbit.” Worse. Now you’re allowed to pick extreme orbits. Extremely elliptical solutions to solar orbits can crash you into planets at fantastic speed.
“Circular orbits?” Not sure what physical reasoning would lead to this one, but Earth’s orbit has exceptionally low eccentricity so it would probably be safe. You could still orbit counterspinward to the earth and smash into it at 60-odd km/s.
“How about requiring that four-momentum be conserved between entry and egress from hyperspace?” Probably the most realistic constraint. Sol is moving at roughly 220 km/s relative to galactic core, so if you went to the opposite side of the galaxy you’d have to burn off roughly 440 km/s. That’s a lot of energy, but certainly not infeasible. It still gives you the opportunity to smash into things, though. For starters, earth could easily be flying towards a planet around a different star at 90 km/s, so once a year you’ve got a window to deliver a pretty impressive kinetic payload to your enemy. Regular freight travel would likely require months of orbital maneuvering to get into a favorable position for orbital sync.
It actually gets much worse than this. FTL drives which conserve four-momentum could allow you to violate conservation of energy. All you have to do is jump into the neighborhood of a big gravity well (e.g. a neutron star, or a supermassive black hole), allow it to pull you to fantastic speeds, then jump to your target before hitting the surface.
I can think of an interesting corollary to this technique, which relies on the fact that gravity wells actually change the energy of light heading into or out of the well, because space near large masses is stretched out more. If you have an FTL system which works like a four-momentum-conserving portal, you can shine a laser towards a black hole or other massive gravity well, have it acquire energy from spacetime compression, enter a portal, and come out at a higher energy than it started. Then you collect the energy with an antenna. You can extract free electricity from spacetime curvature.
It gets weird, because time flows slower deep inside gravity wells. FTL drives are also by definition time machines, and FTL drives which allow you to change your position in a gravity well allow for changing how fast time flows.
I’m pretty sure these sorts of systems are not valid solutions to the equations of general relativity. ;-)
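The numbers in that comment are easy to check; a back-of-the-envelope calculation (the ship mass is my own assumption, purely illustrative):

```python
ship_mass = 1.0e6                   # kg: a 1,000-tonne freighter (my assumption)
dv = 440e3                          # m/s to shed after a cross-galaxy jump
kinetic = 0.5 * ship_mass * dv**2
print(f"{kinetic:.2e} J")           # ~9.7e16 J: tens of megatons of TNT equivalent

v_orbit = 29.8e3                    # m/s, Earth's orbital speed around Sol
closing = 2 * v_orbit               # head-on with a counterspinward circular orbit
print(f"{closing / 1e3:.0f} km/s")  # ~60 km/s, the figure in the comment
```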
Adam Fields, on
Jumps always seem to go standstill to standstill. Does a ship in hyperspace lose all momentum before returning to regular space? Maybe you can’t hit a planet while in motion from hyperspace.
Aphyr, on
This is a graph from Riemann-bench, which uses Schadenfreude, a Clojure time-series benchmarking tool I wrote. Schadenfreude uses Incanter, which in turn uses JFreeChart.
duckie, on
Seems like a GNUPlot export in a vector format, then rendered with Inkscape for instance.
Lee Wei, on
Hi,
I would just like to know what tool(s) you used to provide the graph visualisations in your article. If it’s something like gnuplot, then what settings were there?
Thanks
ryan king, on
This article reminds me of a similar comparison between Java and Scala: http://robey.lag.net/2011/04/30/dissolving-patterns.html
Ah, thanks for catching that, Vedang!
Redina: you’re right, anonymous classes mean we no longer have to name our own instance of the factory. I’ve updated the post to show an example. We still need to look up, import, and extend the factory type–and the library author still needs to write the type in the first place.
When I write Java, I do try to take advantage of anonymous classes closing over final method variables as much as possible. Access to mutable values is… well, dangerous, as you note, but one does at least have a choice not to use them. ;-)
Vedang, on
Sorry about the formatting; what I wanted to point out was that it shouldn’t be pb but protobuf-decoder in the final example. The macro will actually expand to
(let [protobuf-decoder (ProtobufDecoder. (Proto$Msg/getDefaultInstance))]
  (doto (Channels/pipeline)
    (.addLast "integer-header-decoder" (LengthFieldBasedFrameDecoder. Integer/MAX_VALUE 0 4 0 4))
    (.addLast "protobuf-decoder" protobuf-decoder)))
yes?
(Minor nitpick: it’s confusing if someone is trying to follow along and learn about macros.)
Alex Redington, on
I like your conclusion, your exemplar macro, and overall your line of reasoning, but as a person who suffered through an age where working in LISP was even more difficult to achieve than today, when working in Java was something many of us, including me, had to swallow, I’d like to object to some of your distinctions with Java.
Primo: Java has a concept of an Anonymous Class. Generally these classes are implemented against interfaces, and you place them within the definition of some other class that will be using the Anonymous Class. These give you a (verbose, awkward, and still inferior) mechanism for defining fns inline as in Clojure.
Secundo: Java Anonymous Classes can close over their local scope, removing the necessity of all the overhead of get/set, constructor variables propagating scope, etc. However, as Anonymous Classes are able to access those variables, and not just their values, this means that the mechanisms for mutability (and reasoning about its consequences) expand rapidly when you start using Anonymous Classes which have mutating behavior.
These two points are not terribly important, but useful to keep in mind if you find yourself in the unfortunate position of having to write Java source code. Maybe Android development, for example.
Aphyr, on
As an aside, I want to note that my use of a macro here makes sense in Riemann’s context–where pipeline factories are fixed at compile time–but if I were writing an API for dynamic use (e.g. Netty) you might not have all the necessary pieces at compile time, and a macro would just get in the way. A better way to express this API, with a slight performance cost, is simply to use a function:
(channel-pipeline-factory
  integer-header-decoder #(LengthFieldBasedFrameDecoder. Integer/MAX_VALUE 0 4 0 4)
  protobuf-decoder #(ProtobufDecoder. (Proto$Msg/getDefaultInstance)))
… where the function assumes any functions it receives are non-shared, and calls them every time getPipeline() is invoked to generate new handlers.
Since this is in a performance-critical path in Riemann’s infrastructure, I’m trying to eke out every last ounce I can, and avoiding the extra function lookup+invocation (plus some awkward type hints) is one of the things I’m trying. Plus this application of implicitly controlling/delaying expression invocation was too interesting to pass up in a post. :)
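The non-shared-handler contract is easy to state outside Clojure, too; here is a rough Python analogue of the function variant (all names are hypothetical stand-ins, not Netty’s API):

```python
def channel_pipeline_factory(**handler_ctors):
    """Take zero-arg callables and invoke them on every getPipeline() call,
    so no handler instance is ever shared between pipelines."""
    def get_pipeline():
        return {name: ctor() for name, ctor in handler_ctors.items()}
    return get_pipeline

class FrameDecoder:
    """Stand-in for a stateful Netty handler class."""

factory = channel_pipeline_factory(frame_decoder=FrameDecoder)
p1, p2 = factory(), factory()
# Fresh handlers on each call: stateful decoders never leak between channels.
assert p1["frame_decoder"] is not p2["frame_decoder"]
```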
Novaradix, on
Guide to connecting an Android device on Ubuntu (Linux):
Log in as root and create this file: /etc/udev/rules.d/51-android.rules.
Use this format to add each vendor to the file:
SUBSYSTEM=="usb", ATTR{idVendor}=="0bb4", MODE="0666", GROUP="plugdev"
For more information, please look here:
http://androiddeveloperspot.blogspot.in/2013/01/usb-debugging-in-android-ubuntu.html
In one sense you don’t need to worry about whether Gini is a good measure of inequality or not. What we know is that Gini correlates with homicide rate. Thus, lower your Gini and you’ll lower your homicide rate.
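For reference, the Gini coefficient that comment leans on is simple to compute; a minimal sketch:

```python
def gini(xs):
    """Gini coefficient over positive values: 0 is perfect equality;
    (n - 1) / n is one person holding everything."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    # Closed form of the mean-absolute-difference definition over sorted values.
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * total)

assert gini([1, 1, 1, 1]) == 0.0    # everyone equal
assert gini([0, 0, 0, 10]) == 0.75  # one person holds everything (n = 4)
```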