> If you knew which bike model I was googling yesterday, almost all of these guesses might have been more accurate.
I think this sort of guessing is intended to be combined with additional data the marketers already have, like purchase history, location, social media posts, and so on. Basically the VLM output is treated as another data point rather than the sole source, or the existing data could be fed into the model's prompt before reading the image.
That makes sense and is a fair point. And maybe the guesses from the picture would be more revealing if they had my purchase history as context. But I wanted to make the point that the last 10 web pages that I visited would probably allow you to make much more accurate predictions about me that would partly contradict the guesses the model made.
IMO they should sell appropriately priced licenses that allow the use of more VMs. Make the licenses expensive enough so that it doesn't eat into hardware sales, or explicitly prohibit VDI/virtual seats in the license agreement.
Currently services like Github Actions painfully and inefficiently rack thousands of Mac Minis and run 2 VMs on each to stay within the limits. They probably wouldn't mind paying a fee to run more VMs on Mac Studios instead.
Another +1 for this one as this is what turns this tool from a toy environment with basic sketches into something that's actually useful for larger projects with a full toolchain, libraries, and so forth.
A lot of simulators stop at simple sketches, but the goal with Velxio is to support more realistic workflows: multiple boards interacting, real toolchains, and more complex setups.
Still early, but definitely moving in that direction.
I've had reasonably good results digitizing VHS-C home video with a composite to HDMI converter/scaler followed by an HDMI capture card. The converter does TBC and deinterlacing, and I find the resulting footage to be much clearer and more stable than what you get out of a regular composite to USB dongle.
If you have an AVR with composite or s-video in and HDMI out that could also work in place of the converter. In either case you'll downscale the footage back to 640x480 before encoding.
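As a concrete (hypothetical) example of that last step, assuming the capture card handed you a 1080p upscale, an ffmpeg invocation along these lines would scale the footage back down before encoding (filenames and quality settings are placeholders, adjust to taste):

```shell
# Scale the HDMI capture back to ~VHS resolution and encode with x264;
# audio is passed through untouched.
ffmpeg -i capture_1080p.mkv -vf scale=640:480 \
       -c:v libx264 -crf 18 -c:a copy vhs_archive.mkv
```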
If the tapes are bad you have to monitor the process from start to finish; there's no way around that.
For MiniDV and Digital8 you should straight up get a lossless copy using a cheap Firewire card.
I archived all my MiniDV tapes using a cheap FireWire card and dvgrab on Linux; it can be set to automatically split non-continuous clips into separate files for easy viewing. It's very straightforward to use and can be done unattended.
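For reference, a typical dvgrab invocation for this workflow might look like the following (flags as documented in dvgrab's man page; the `capture-` file prefix is arbitrary):

```shell
# Capture raw DV over FireWire, writing type-2 DV AVI files and starting
# a new file whenever the tape's recording datecode jumps (a new clip).
dvgrab --autosplit --timestamp --size 0 --format dv2 capture-
```

`--size 0` removes the default file-size cap so clips aren't split mid-scene.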
Just thinking back 10 years ago when I was archiving all my DV tapes on my Dad's old G5... I did it all by hand through Final Cut Express. It would've been sooo much easier had I known about dvgrab back then!
Many distros (including Raspberry Pi OS) don't enable `CONFIG_FIREWIRE_OHCI` in the kernel, so support isn't built-in, unless you build your own kernel.
Right, that matches my understanding. After 2029, it'll stick around as long as it continues to compile. If it fails to compile it would get dropped rather than updated, as there's no maintainer.
This was around 2020 or 2021. I had an old laptop with a firewire port which was already running Ubuntu. I couldn't make it work. That's when I found that the support was removed from the kernel, and that's what led me to Linux Mint. I bought a new SSD and installed Linux Mint, and I was able to import my video tapes with no further issue.
An Ubuntu support page says eth1394 has been removed from the kernel since version 2.6.22.
Edit: This was a VERY old laptop. I think it has a 32 bit processor. Maybe that confounded the issue.
> An Ubuntu support page says eth1394 has been removed from the kernel since version 2.6.22.
That doesn't really mean what you think it means, since they removed that module to replace it with a more standard one. And besides, the presence or absence of eth1394 wouldn't affect a camera or FireWire interface in any meaningful way.
This isn't really what you're asking for, but is virtualization possible on the client side? Either through direct virtualization on the client PC or using VDI. Basically, IE and Windows with admin rights would run in a restricted VM devoted solely to that app, with network access limited to connections to the legacy server and any management requirements.
This would incur an added cost in licensing and possibly hardware but this would also be the cleanest way to do it. Also on the security side this would be safer than escalating a legacy ActiveX app on the secure client.
Having multiple instances of IE running remotely on Windows Server and then served using Citrix or something similar should work as well if you don't need full VM isolation between clients, and I've seen this used in real companies with legacy apps that can't run on the standard employee machines. Again though this has a licensing cost.
I remember a case where a company decided to assign employees random 16 character passwords with symbols and rotated them every 90 days or so. They were unchangeable and the idea was that everyone would be forced to use a secure password that changed regularly.
You can probably guess what happened: no one could remember their passwords, so people wrote them down on notepads or sticky notes instead.
Writing down a password is a great option; however, you need to keep that paper in a secure location. Put it in your wallet and treat it like a $100 bill: don't tape it to the monitor or stick it under the keyboard.
A password manager is better for most things, but you need to unlock the password manager somehow.
I mean the writing's on the wall, they just don't want to do it all at once to avoid backlash. I wouldn't be surprised if they kill sideloading completely several years down the road.
Thanks for this writeup as I haven't had time to review the video yet :)
So, the only way to manipulate it is to actually screw with the internals of the CPU itself by "glitching", meaning tampering with the power supply to the chip at exactly the right moment to corrupt the state of the internal electronics. Glitching a processor has semi-random effects and you don't control what happens exactly, but sometimes you can get lucky and the CPU will skip instructions. By creating a device that reboots the machine over and over again, glitching each time, you can wait until one of those attempts gets lucky and makes a tiny mistake in the execution process.
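The "reboot over and over until one attempt gets lucky" loop can be illustrated with a toy Python model (entirely invented for illustration; the real PSP boot flow is nothing this simple). A glitch is modeled as skipping exactly one step of a boot sequence, and the attacker reboots, glitching at a random point each time, until the skipped step happens to be the enforcement check:

```python
import random

def boot(firmware_ok, skip_index=None):
    # Toy boot ROM: load firmware, verify it, halt if verification
    # failed. A glitch that skips instruction i is modeled by simply
    # not executing step i.
    ctx = {"verified": False, "halt": False}
    steps = [
        lambda: None,                                  # load firmware
        lambda: ctx.update(verified=firmware_ok),      # verify signature
        lambda: ctx.update(halt=not ctx["verified"]),  # enforce result
    ]
    for i, step in enumerate(steps):
        if i == skip_index:
            continue  # the glitch made the CPU skip this instruction
        step()
    return "halted" if ctx["halt"] else "booted"

def glitch_until_lucky(max_attempts=1000, seed=0):
    # Reboot repeatedly, glitching at a random point each time, until
    # one attempt skips the single instruction that matters.
    rng = random.Random(seed)
    for attempt in range(max_attempts):
        if boot(firmware_ok=False, skip_index=rng.randrange(3)) == "booted":
            return attempt
    return None
```

Note that skipping the verify step (index 1) still halts, because `verified` stays False; only skipping the enforcement check (index 2) boots bad firmware, which matches the "semi-random, sometimes lucky" behavior described above.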
Considering that the PSP is a small ARM processor that presumably takes up little die space, would it make sense for them to employ TMR (triple modular redundancy) with three units in lockstep to detect these glitches? I really doubt that power supply tampering would cause the exact same effect in all three processors (especially if there are differences in their power circuitry to make this harder), and any discrepancies would be caught by the system.
The Nintendo Switch 2 uses DCLS (dual-core lockstep) in the BPMP and the PSC (the PSC is PSP-like, but RISC-V). So yes, it helps; I'm unsure if/where Microsoft uses it in their products.
DCLS actually makes sense for this scenario, as the fault tolerance gained from having three processors isn't needed here. The system can simply halt when there's a mismatch; it doesn't have to perform a vote and continue running when 2 of 3 agree.
Also, I just thought of this, but it should be possible to design a chip where the second processor runs a couple of cycles behind the first one, with all the inputs and outputs stashed in FIFOs. This would make any power glitch affect the two CPUs differently, so any discrepancies would be easily detected.
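The delayed-lockstep idea can be sketched in a toy Python simulation (the "CPU" model, constants, and names here are all invented for illustration; a real implementation would be done in hardware):

```python
from collections import deque

DELAY = 2  # hypothetical cycle offset between the two cores

def core_step(state, inp):
    # Toy "CPU": fold the input into the state and emit an output word.
    state = (state + inp) & 0xFFFFFFFF
    return state, state ^ 0xA5A5A5A5

def run_lockstep(inputs, glitch_at=None):
    """Run two identical cores over the same input stream. Core B lags
    DELAY cycles behind core A, with pending inputs and A's outputs
    buffered in FIFOs. A transient fault then hits the two cores while
    they are on *different* steps, so it diverges their output streams
    and is caught by the comparator."""
    in_fifo, out_fifo = deque(), deque()
    a_state = b_state = 0
    for t, inp in enumerate(inputs):
        a_state, out_a = core_step(a_state, inp)   # lead core
        in_fifo.append(inp)
        out_fifo.append(out_a)
        if t >= DELAY:                             # delayed core
            b_state, out_b = core_step(b_state, in_fifo.popleft())
            if out_b != out_fifo.popleft():
                return "halt"                      # mismatch -> stop
        if glitch_at == t:
            # A shared supply glitch corrupts both cores at the same
            # wall-clock instant, but they are executing different
            # inputs, so the corruption is not self-consistent.
            a_state ^= 1
            b_state ^= 1
    return "ok"
```

(The last DELAY outputs are never compared in this sketch; a real design would flush the FIFOs before declaring success.)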
Piezo mics are pretty cheap, and wired up to the microphone input of a computer or phone, they could probably give even better accuracy with the same signal-processing techniques.