I thought about it for 15 mins, but couldn’t think of any mathematical tricks. I thought of lots of minor tricks, like comparing the running total to the target and not bothering with any more multiplications once it’s already over (sketched after the code below), things that would cut 10%-20% here and there, but nothing that fundamentally changes the big-O running time.

For reference, here’s my solution for part 2 in Smalltalk. I just generated every possible permutation of operators and tested it. Part 1 is similar; mainly I just used bit magic to avoid generating permutations.

(Even if you haven’t used it, Smalltalk is fairly readable: everything reads left to right, except inside parens.)

day7p2: in
	"Answer the sum of the targets of every line whose operands can be combined with +, * or concat to hit the target."
	| input |
	
	input := in lines collect: [ :l | (l splitOn: '\:|\s' asRegex) reject: #isEmpty thenCollect: #asInteger ].
	
	^ (input select: [ :line |
		(#(1 2 3) permutationsWithRepetitionsOfSize: line size - 2) 
			anySatisfy: [ :num | (self d7addmulcat: line ops: num) = (line at: 1)]
	]) sum: #first.

d7addmulcat: nums ops: ops
	"Evaluate the operands of nums left to right, applying the op codes in ops: 1 = multiply, 2 = add, 3 = concatenate."
	| final |
	
	final := nums at: 2.
	ops withIndexDo: [ :op :i |
		op = 1 ifTrue: [ final := final * (nums at: i + 2) ].
		op = 2 ifTrue: [ final := final + (nums at: i + 2) ].
		op = 3 ifTrue: [ final := (final asString, (nums at: i + 2) asString) asInteger ]
	].

	^ final
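
For illustration, the early escape I mentioned would look something like this (untested sketch, same conventions as the solution above; it works because with positive operands none of the three operations can shrink the running total):

d7addmulcat: nums ops: ops target: target
	"Like d7addmulcat:ops:, but bail out as soon as the running total already exceeds the target."
	| final |
	final := nums at: 2.
	ops withIndexDo: [ :op :i |
		final > target ifTrue: [ ^ final ].	"no later +, * or concat of positive numbers can bring it back down"
		op = 1 ifTrue: [ final := final * (nums at: i + 2) ].
		op = 2 ifTrue: [ final := final + (nums at: i + 2) ].
		op = 3 ifTrue: [ final := (final asString, (nums at: i + 2) asString) asInteger ]
	].
	^ final

The anySatisfy: block in day7p2: would then just pass the target along, e.g. self d7addmulcat: line ops: num target: (line at: 1).
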
  • ornery_chemist@mander.xyz · 3 days ago

    Inverse concat isn’t too heavy if you implement it with logs and such. It’s certainly still heavier than integer add/mul (or sub/div in my case), but arithmetic is usually faster than memory allocation. However, predicting the performance hit due to branching a priori is tricky on modern hardware, which implements sophisticated branch prediction and speculative execution. Furthermore, branching happens anyway between terms to select the right operation, though a misprediction there is likely less significant unless you are doing string manipulation.
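
    In the Smalltalk above, the inverse concat could be done with pure integer arithmetic along these lines (a hypothetical sketch, not my actual code, which is in Clojure/Scheme): strip the trailing term’s digits off the target and keep going with what’s left.

    d7unconcat: target last: last
    	"Answer target with the digits of last removed from the end, or nil if target does not end in last (positive integers assumed)."
    	| shift |
    	shift := 10 raisedTo: (last floorLog: 10) + 1.	"10 ^ (number of decimal digits in last)"
    	^ (target \\ shift) = last
    		ifTrue: [ target // shift ]
    		ifFalse: [ nil ]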

    “Overall number of checks” is a bit ambiguous; if taken to mean the number of times I check against the target for early escape, plus the final check on success, the figure is 15% relative to the average 3^(n-1) / 2 checks required by brute force (n = number of terms in the equation, giving n-1 operators). That’s still almost a 7-fold decrease. If we instead look at the number of operator evaluations relative to the (n-1) * 3^(n-1) / 2 evaluations expected from an average brute-force search (3^(n-1) / 2 combinations with n-1 operations conducted per combination), the figure is only 7.0%. In both cases, there is a significant amount of work not being done.
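
    To make those brute-force denominators concrete (arbitrary n, not a figure from my input): an equation with n = 12 terms has 3^11 = 177147 operator combinations, so an average brute-force run checks about half of them.

    (3 raisedTo: 11) / 2.		"≈ 88 574 combinations checked on average"
    11 * (3 raisedTo: 11) / 2.	"≈ 974 309 operator evaluations, at n - 1 = 11 per combination"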

    • morrowind@lemmy.ml (OP) · 4 days ago

      Interesting. I’m doing naive string cat; it would probably be way faster with just math.

      Now that I think about it more carefully, you can effectively prune whole trees of options with those checks, especially for cat.
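
      For instance, working backwards from the target and undoing operators prunes aggressively, since most branches die immediately on a divisibility or digit-suffix test. An untested sketch in the same style as my solution (d7solvable:terms: is a made-up name):

      d7solvable: target terms: nums
      	"True if the numbers in nums, combined left to right with +, * or concat, can produce target.
      	 Works from the last operand backwards, undoing each operator and dropping impossible branches."
      	| last rest shift |
      	nums size = 1 ifTrue: [ ^ target = nums first ].
      	last := nums last.
      	rest := nums allButLast.
      	"undo + : needs a positive remainder"
      	(target > last and: [ self d7solvable: target - last terms: rest ]) ifTrue: [ ^ true ].
      	"undo * : needs last to divide target exactly"
      	(target \\ last = 0 and: [ self d7solvable: target // last terms: rest ]) ifTrue: [ ^ true ].
      	"undo cat: needs target to end with the digits of last"
      	shift := 10 raisedTo: last asString size.
      	(target > last and: [ (target \\ shift) = last
      		and: [ self d7solvable: target // shift terms: rest ] ]) ifTrue: [ ^ true ].
      	^ false

      Each parsed line would then be checked with self d7solvable: (line at: 1) terms: line allButFirst, and every failed inverse test discards an entire subtree of operator choices.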

      I wonder, did you get to benchmark both approaches?

      • ornery_chemist@mander.xyz · 3 days ago

        That pruning is indeed the goal. As for benchmarking, I did not implement a brute-force solution; I might try it if I finish one of the next few days quickly (lol fat chance). I did bench math vs string cat but did not record numbers. IIRC it made no measurable difference in Clojure, where input parsing with my crappy parser/combinator lib dominated, but math was something like a factor of three faster than string cat in Chez Scheme.