To play devil’s advocate, I still see issues pop up about Python 2.x vs. 3.x sometimes. JS and the whole transpilation idea, where you can use new language features (ES6/ES7+) while staying portable across environments (i.e. the code is transpiled down to ES5), is pretty cool.

That being said, as someone who works with JS full time (and specifically on a large AngularJS 1.x app), it can definitely get overwhelming. At the moment, building a larger, modern web app that adheres to best practices seems to involve understanding and using different things for the view layer (React/Vue/Angular 2), CSS-in-JS (styled-components, glamor, etc.), the data layer (Redux), adding in strongly-typed benefits (TypeScript), immutable data structures (Immutable.js), and then building/bundling/packaging everything (Webpack, NPM, Grunt, Gulp, etc.).

I tried to sit down and start looking at what a rewrite of our angular app would look like with a collection of these newer technologies, and it was very overwhelming…this is part of the reason why I’ve been against adding dependencies in the side projects I’ve worked on, including the teletype docs enhancements :grin:

4 Likes

I am aware of it, and have heard good things about it too; I just don’t do any web dev anymore. I do think the Haskell/ML style of separating the type signature from the function implementation greatly aids readability. C-style function declarations don’t cope well with the templates and traits that C++ and Rust respectively add into the mix.

I think the environments that it’s used in don’t have the same “not invented here” ethos. I suspect this is due to a lack of resources more than anything. The only unicorn that comes to mind that uses it is Dropbox. From the outside, it seems that some of the churn of Javascript comes from engineers needing to make busywork. Javascript is a victim of its own success.

That is a fair point. The pain point was always the redefinition of the string type to be an opaque Unicode text type, rather than a byte array. You see the same issue in a lot of other pre-Unicode languages (Haskell has 3 common string types in use: String, Text and ByteString, yuck!). I suspect history will prove them correct in making the change, but that they could have made the transition easier.

To anyone on the fence, 3.6 has enough interesting new features over 2.7 to make the change worthwhile. I particularly like f-strings, and pathlib from 3.4 is really nice too.

That is a very wise decision.

A few years back I decided I wanted to make a printable picture-coded calendar for my pre-school daughter. It would list the days she was at nursery, weekends, birthdays, what her new Mr. Men / Little Miss book was that week, etc. It’s been incredibly useful, particularly for understanding the concept of days and weeks.

I very briefly thought about doing it in Haskell or some such, but the main point was to get it done relatively quickly, and well, HTML/CSS/JS is an amazing canvas for creating data-driven documents. I used React to do it (none of the other stuff like Redux), partly to learn what it was all about, as it was generating a lot of hot air at the time. Plus it also used Babel, as vanilla Javascript is yucky.

I was really impressed by it. It made it really easy to quickly build small, composable bits of UI. Until 6 months later, when I tried updating my package.json and it all went to crap. So now it’s stuck using 2-year-old versions of everything. I do worry what I’d do if I were doing this professionally; running old dependencies is a great way to get Equifaxed.

2 Likes

Just a little aside - I’ve been writing Haskell (casually) for the last 3 months & once you realize that you don’t need to understand monads & functors to get in the door, it’s actually pretty accessible, albeit with a somewhat different mental model of data & control flow. Pattern matching, ADTs and list comprehensions are sorely missed when I’m writing C day-to-day. These 3 things are much closer to the way I think about software than the traditional imperative approach.
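(For what it’s worth, the nearest C stand-in is a tagged union dispatched with a switch - a rough sketch with made-up types, minus the exhaustiveness checking the Haskell compiler gives you:)

/* a tagged union is C's poor-man's ADT... */
enum shape_tag { CIRCLE, RECT };

struct shape {
    enum shape_tag tag;
    union {
        struct { double r; } circle;
        struct { double w, h; } rect;
    } u;
};

/* ...and a switch over the tag is the poor-man's pattern match */
double area(const struct shape *s) {
    switch (s->tag) {
    case CIRCLE: return 3.14159 * s->u.circle.r * s->u.circle.r;
    case RECT:   return s->u.rect.w * s->u.rect.h;
    }
    return 0.0; /* unreachable if the tags above are exhaustive */
}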

Now if only it would run on a microcontroller, I wouldn’t be compelled to partake in this thread!

3 Likes

I guess I was being a bit tongue-in-cheek. I also write Haskell a fair bit these days; I started learning about 2 years ago (2nd attempt). It’s now my go-to programming language. I even understand what a Monad is!

I’m just saying that there is an opening for a less full-on language that also brings pattern matching, ADTs, etc. All new programming languages should include them, since, as you say, they sit much closer to how we think about software.

No microcontrollers… but there is CλaSH, which compiles down to Verilog/VHDL. I’m not sure how mature it is, and in truth it’s really a DSL embedded in Haskell, with its own set of idioms.

Rust is definitely getting close to being practical with embedded hardware, and has a lot of stuff from Haskell. (Traits in Rust are like Typeclasses in Haskell.)

I also completely forgot about Swift, which may end up gaining popularity outside of the Apple ecosystem. I don’t know much about it, and have no idea how approachable it is as a language.

Swift is pretty easy to learn. And this coming from a guy who doesn’t know how to use a monad. :wink: No idea what non-Apple use of Swift is like, though.

1 Like

while we’re derailing the C programming tips thread, i guess i should reveal that the next monome thing heavily uses lua.

12 Likes

You’ve probably used them more than you know. If you have a value wrapped inside of something (e.g. an optional in Swift), and you bind1 the inner value into a new value via some computation, that’s amore^H^H^H^H^H, I mean, that’s monadic.

A bit of Googling tells me that Swift has optional chaining (?.) and a flatMap method on optionals, which are basically monadic bind.

1 sometimes also called flatMap

Ooooo interesting. Never really used Lua. I do know it uses 1-based indexing, which will make the mathematicians happy.

3 Likes

you can’t do that to us b

(I’m saying that tongue fully in cheek…this is great news!)

2 Likes

oh hello (20 characters for You Had Me At Lua)

1 Like

And here I’ve been studiously avoiding derailing the C programming tips thread with C++ tips.

6 Likes

I got into it a tiny bit - kind of interesting & fun! I paired it with an iCEstick FPGA, the open-source Yosys toolchain, & a ‘usbstreamer’ USB->I2S soundcard. Got some basic stuff working, like an attack-decay envelope follower. The really wicked feature of clash for fixed-point DSP design is the fixed-point fractional type system.

I would argue the audio domain is not well-suited to the ‘pure hardware’ approach. If you ‘directly’ synthesize a moderately complex DSP algorithm in hardware, the physical size (number of gates) grows rather rapidly, so you quickly run out of gates if your logic is clocked directly at a typical audio samplerate.

Say you have 10 biquad filters in your algorithm; you have to choose between either:
a) using 10x as many gates, or
b) designing aux logic which shovels biquad state for a single ‘biquad accelerator’ to and from a RAM several times per audio frame.

Even straightforward algorithms can get pretty hairy when you play this game. Something more involved/conceptual like lines/waves!? zoinks!

It would be so cool if we could somehow use clash’s fractional types to write blackfin-optimised fixed-point DSP objects for aleph! Even after much practice, moving floating point ideas to fixed-point blackfin-ised C is laborious…

1 Like

I know nothing about FPGAs. But I’m okay with Haskell…

The above strikes me as the kind of thing that one can generalise in Haskell…

ramBacked :: (KnownNat n, RamStorable d, RamState d s)
          => (Signal a -> s d (Signal b))
             -- ^ the signal you wish to multiplex; it has access to the
             --   state-like monad RamState to store and retrieve contents
          -> Signal (Vec n a)
          -> Signal (Vec n b)

The above is really rough, as it’s based on a very quick parsing of the types that Clash uses. And I haven’t made any attempt to deal with clock rates in the type signatures.

I also don’t know if Clash is able to deal with Signals that operate internally at a faster rate.

1 Like

I saw an article on Hacker News today about Lua. It was interesting learning about the whole PUC Lua vs. LuaJIT thing, and also 5.1 vs. 5.2 vs. 5.3.

So… which version of Lua?

wow this thread has drifted pretty far :smiley:

been using 5.3 but i’m thinking of dialing back to 5.2 and freezing there, so as to allow switch to luaJIT

which sucks cause i like that 5.3 gives us Integers, finally… but in another project i saw 20x - 100x speedups with luaJIT vs PUC lua. hard to argue with that.

1 Like

I blame JavaScript. One mention and blammo, chaos ensues. :wink:

2 Likes

just a word of caution that while function pointers (FPs) are certainly a powerful tool, their indiscriminate use can indeed slow things down on embedded systems with limited registers/stack.

the dereference mightn’t seem like a big deal, but it’s at least one extra instruction to fetch the address before the jump, and likely a few more to save/restore existing register contents. so figure 2-5 instructions for your FP call, vs 1 to jump to a known address/offset. in a realtime inner loop this could be significant.

lots of avr32 and blackfin code around here, so this is a relevant point - skektek managed to squeeze out a critical few cycles in aleph-waves by factoring out some over-clever pointer usage!

specifically, this usage here smells off to me - saving 6 lines of code doesn’t justify it. i’d prefer a regular old switch statement with inline functions. each case is then just a conditional jump (probably a single instruction.)
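to make it concrete, here’s a minimal sketch of the two dispatch styles (handler names are made up):

typedef void (*handler_t)(int data);

static void handle_key(int data) { (void)data; /* ... */ }
static void handle_enc(int data) { (void)data; /* ... */ }

/* fp table: each call is an indirect jump, preceded by a fetch of
   the target address (and likely some register save/restore) */
static handler_t handlers[] = { handle_key, handle_enc };

void dispatch_fp(int type, int data) {
    handlers[type](data);
}

/* switch: each case is a conditional jump to a known address, and
   short handlers can be inlined into the caller outright */
void dispatch_switch(int type, int data) {
    switch (type) {
    case 0: handle_key(data); break;
    case 1: handle_enc(data); break;
    }
}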

in fact with hindsight, i think the usage of FPs for monome serial protocols, while it is totally fine for libmonome, was not the best choice for aleph / libavr32. (not super critical since these protocols only get used at >=1ms intervals, but still.)

(for similar reasons, embedded developers are very careful with inheritance / virtuals in c++. generally prefer templates unless code size is a huge issue.)

(as an aside, i think that almost any time FPs are called for in non-throwaway code, it is considerate to typedef them. otherwise your pointer-to-function-accepting-pointer-to-function-accepting-pointer will be totally unreadable to future maintainers.)
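for instance (names made up):

/* raw declaration: a pointer to a function that takes a
   pointer-to-function(int) and returns another one. yikes. */
void (*(*register_cb)(void (*)(int)))(int);

/* the same shape, typedef'd one layer at a time */
typedef void (*callback_t)(int);
typedef callback_t (*cb_transform_t)(callback_t);

cb_transform_t register_cb2;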

5 Likes

Agreed 100%. The code in question was test code that ran on a desktop, so no concerns there, but for embedded code I wouldn’t do such a thing to save code lines.

It’s a delicate balance on the embedded hardware. I’ve been resisting inlining the code that I write, even when in the moment it’s an obvious candidate, simply because I’m not yet familiar enough with the pressures on flash memory.

that is a very fair point; my reference to inlining was unnecessary and a red herring. sorry!

but since we’re here, we may as well dig down on inline a bit, since there is a ton of confusion about it. i have partaken in this confusion: i was first told to use inline liberally in the 90s, as an alternative to macros, because inlining was “faster” - this is not really true anymore but it is a pervasive misconception.

here are the relevant references
http://en.cppreference.com/w/c/language/inline
https://gcc.gnu.org/onlinedocs/gcc/Inline.html

in a nutshell:

  1. inline was first introduced in c++, where you often see inline definitions in header files. this tells the compiler that the function may be defined in multiple compilation units (and those definitions must in fact be the same.) it behaves a bit like static for header files, and is a bit of a misnomer because it is really about linker visibility and not inlining per se.

  2. in C (C99 onwards), you most often see inline void func(void) { … } in headers and a matching extern inline void func(void); in one .c file, which emits the external definition there.

  3. there are other subtle differences. in C, function-local statics from different definitions of an inline function are distinct; in C++ they are the same!

  4. in C you also often see static inline void func(void) { … } in .c files, and this is probably the usage we’re both thinking of. (idioms 2 and 4 are sketched just below.)
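a minimal sketch of those two idioms (function names are made up):

/* --- foo.h: case 2, the C99 idiom --- */
inline int clamp16(int x) {
    return (x > 32767) ? 32767 : (x < -32768) ? -32768 : x;
}

/* --- foo.c (includes foo.h) --- */
/* this declaration makes the compiler emit one external
   definition, for any call sites that don't get inlined */
extern inline int clamp16(int x);

/* case 4: static inline, visible only in this .c file; if every
   call is inlined, no code for it is emitted at all */
static inline int wrap16(int p) {
    return p & 0xffff;
}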

the gcc docs make things a little more specific (thankfully)

When a function is both inline and static, if all calls to the function are integrated into the caller, and the function’s address is never used, then the function’s own assembler code is never referenced. In this case, GCC does not actually output assembler code for the function, unless you specify the option -fkeep-inline-functions.

most of the uses of inline in my code (relevantly, aleph and libavr32) are in this context. they are always very short functions, usually wrapping a single function call for the purposes of readability. i would argue that this usage does not actually increase the code size - no ASM is emitted for the function, and each call to it in the code is simply replaced with a different, less readable function call.

that said, it’s not really a useful optimization either way since the compiler will decide on its own what to inline (thank you very much) and in addition will consider optimization flags at compile time.

so on balance i think it’s very rare that using inline is actually a good idea, except in specific and unusual idioms (like case 2 above). IIRC it is actually necessary in c++ for certain template specializations… but that’s “OT” :slight_smile:

TL;DR: you’re right - kids, don’t use inline unless you really need it, and you probably don’t.


BTW, i should say that i in no way intended to nitpick your code, but just wanted to weigh in (alongside others) on the benefit / drawbacks of function pointers in embedded contexts. not because you haven’t considered it, but because this thread is “tips and tricks” and has a pedagogical / dialogue function.

4 Likes

inline can be quite useful on AVR (the 8-bit one) as an alternative to macros, which I try to avoid because they are confusing to debug and not generally type-safe. The function call overhead on those processors is actually quite high. I often pair it with __attribute__((always_inline)); for some reason avr-gcc isn’t very good at figuring out that it should inline functions, maybe because I use -Os or -O1 most of the time. I always check with avr-objdump -S to make sure things were compiled as expected. It’s a great way to find inefficient assembly code, though heavily optimized bits can look super confusing.
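Something like this (pin names made up; assumes the usual avr-gcc setup with -mmcu set):

#include <avr/io.h>

/* the attribute forces inlining even at -Os, where avr-gcc's own
   heuristics tend to leave small functions as real calls */
static inline __attribute__((always_inline))
void led_set(uint8_t on) {
    if (on) PORTB |=  _BV(PB0);
    else    PORTB &= ~_BV(PB0);
}

With that in place, avr-objdump -S on the .elf should show the port twiddling dropped in at each call site, with no call/rcall.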

interesting!
but yeah, i’d try and address that with gcc flags. in order of aggressiveness:

-finline-small-functions
-finline-functions
-fearly-inlining
-findirect-inlining

i think these are all disabled by default at -Os and -O1…
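e.g. something like (file name made up):

avr-gcc -mmcu=atmega328p -Os -finline-small-functions -c synth.c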

the general argument (take it or leave it) being that it’s the compiler’s job to estimate the overhead of a function call and factor that into its optimization passes.

but yeah if you want to specify per-function behavior then that’s what __attribute__((always_inline)) and __attribute__((noinline)) are for… and the attributes exist because the inline keyword doesn’t generally do what it says on the tin (as you’ve in fact pointed out!)