@tal@lemmy.today avatar

tal

@tal@lemmy.today

tal,
@tal@lemmy.today avatar

I think that that would be hard to make work from a business standpoint. Too hard to reserve enough cash to operate for 50 years; businesses can go under.

Maybe just buy the physical movie.

I mean, a Blu-ray movie (or whatever format is popular these days) plus a player is self-contained and will keep working as long as you don’t damage one or the other.

People still use fifty-year-old vinyl records.

tal,
@tal@lemmy.today avatar

If you want to do the maths, the maximum one can possibly earn in Spotify royalties is $0.003 a stream. It doesn’t add up to a living wage for most artists.

And now, to make matters far worse, starting in 2024 Spotify will stop paying anything at all for roughly two-thirds of tracks on the platform. That is any track receiving fewer than 1,000 streams over the period of a year.

Honestly, does the 1k floor matter much? Based on the above text, the most that such a track can possibly make is $3/year. It’s a safe bet that most aren’t sitting right at 999 streams at the maximum per-stream rate; most are probably well below that. I have a hard time seeing someone caring much about that.
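Just to make that ceiling explicit, here’s the arithmetic (a trivial sketch using the quoted per-stream figure):

```python
# Best case for a track just under the proposed 1,000-stream floor,
# at the quoted $0.003-per-stream maximum royalty.
max_per_stream = 0.003
streams = 999
print(f"${max_per_stream * streams:.2f} per year")  # -> $3.00 per year
```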

I’m not saying that there isn’t some kind of business model for which a track making $1/year or so might make sense (massive numbers of cheap, machine-generated tracks targeting very specific tastes that all get a few streams each). But for conventionally-produced music, I think that if you’re making a song that’s generating 50 cents or 10 cents a year or something, it’s basically not on your radar financially.

tal,
@tal@lemmy.today avatar

From the article, it sounds like this affects people with a dumb TV who use some kind of hardware gizmo that Amazon sells to be a streaming endpoint.

I think that it’s really more linked with streaming services than with smart TVs.

tal,
@tal@lemmy.today avatar

Does this play at all on a mouse and keyboard?

Sounds like no, for the base game. There are some people talking about emulating a virtual joystick here:

gog.com/…/playing_xwing_alliance_with_mouse_keybo…

pcgamingwiki.com/…/Star_Wars:_X-Wing_Alliance

A controller or joystick is required for playing the game - without it, the game will not start. XWA Upgrade Mega Patch includes Babu Frik’s Configurator which includes an option for Joystick emulation, bypassing this requirement. Still, a joystick is strongly recommended to be used.

I haven’t had a joystick since the original TIE Fighter.

I would assume that you can use the thumbsticks on a controller unless it really requires precision.

Also, inexpensive joysticks don’t cost that much, though there aren’t a lot of games that really make use of them these days.

tal, (edited )
@tal@lemmy.today avatar

I did this with Civilization 5 and Civilization 6. As far as I know, Civilization 6 is a perfectly solid game – doesn’t have the C:S2 launch problems – but I’ve already got a bunch of DLC for 5, and I don’t feel like Civilization 6 adds enough over Civilization 5 to warrant going back and buying a bunch of content again.

I’ve got no problem with the Paradox model of “sell a base game, then keep selling content that’s worth the money”. In fact, I’m pretty happy with it. However, that doesn’t extend to “repurchase content every couple of years for $180 or so”.

tal,
@tal@lemmy.today avatar

I’m not going to buy it, because I don’t like the series or genre. Feels really shallow and repetitive to me.

But in general – say, for some hypothetical developer of a game that I like – sure, I’d buy a $100 DLC. Hell, lots of games have more than $100 of DLC (though it’s commonly broken up into smaller chunks).

But that sword cuts two ways. The DLC that the hypothetical developer is making has to actually be worth $100 to me. And I can get a lot of really good games for considerably less than $100. Which means that whatever they’re coming out with would need to provide a great deal of really good gameplay to be competitive.

tal,
@tal@lemmy.today avatar

I don’t think that Blizzard is in Silicon Valley.

googles

Yeah.

They’ve got a location in Southern California – not Northern California, where Silicon Valley is – and another in Boston, Massachusetts, and something in Austin, Texas.

tal,
@tal@lemmy.today avatar

Apple chose to drop support for 32-bit applications in macOS 10.15 (released 2019), and since many developers have not updated their games to support 64-bit executables, some games will effectively stop functioning on macOS.

This change is required as core features in Steam rely on an embedded version of Google Chrome, which no longer functions on older versions of macOS.

Hmmmm.

It would be interesting to see a list of which games are affected.

Some games (at least on Linux, that I’ve seen, and I would assume on MacOS) that are distributed on Steam don’t actually require Steam to be present to run. Like, Caves of Qud will work fine without Steam present; they just use it to distribute and update the binaries, not for any kind of DRM or anything. So for those guys, as long as you can download the binaries via Steam (or a related app…I remember that there’s some Python program that can download from Steam using credentials) you can presumably copy them to a 32-bit guest VM.

Some games will probably actually rely on Steam, like for achievements or something. For those…If there are a substantial number of Mac games that won’t work in a 64-bit environment, I am wondering if it is possible to make a “steamlib proxy” – basically, have a 32-bit Mac VM, run the game in the VM, but have Steam running in a 64-bit host environment, and just relay calls to a process launched under the host environment that uses the host steamlib to talk to Steam. Valve presumably isn’t gonna set that up as a supported environment, but I wonder if that might be a viable open-source project.
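To make that idea a bit more concrete, here’s a rough sketch of the host-side half of such a relay (Python, purely illustrative: the port, the JSON-lines protocol, and the SetAchievement example are all made up, and a real shim would have to speak the actual Steamworks interfaces):

```python
# Hypothetical host-side relay for the "steamlib proxy" idea sketched above.
# A shim library inside the 32-bit-capable guest VM would send one JSON object
# per line, e.g. {"call": "SetAchievement", "args": ["ACH_EXAMPLE"]}; this
# process, running next to the real 64-bit Steam client on the host, would
# forward each call to the real Steam library (stubbed out with a print here).
import json
import socketserver

class SteamCallRelay(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:                   # one JSON message per line
            msg = json.loads(raw)
            call = msg.get("call")
            args = msg.get("args", [])
            # Placeholder: a real implementation would invoke the actual
            # Steamworks API here and send back the real result.
            print(f"guest requested {call}({', '.join(map(repr, args))})")
            self.wfile.write(b'{"ok": true}\n')  # ack back to the guest shim

if __name__ == "__main__":
    # Arbitrary port choice for the sketch.
    with socketserver.TCPServer(("0.0.0.0", 27200), SteamCallRelay) as server:
        server.serve_forever()
```

The guest-side piece would then be a drop-in replacement for the Steam library that serializes each call over the socket instead of talking to a local client.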

EDIT: Not the Python program I was thinking of, but here’s a .NET program that downloads apps from the depots, just to demonstrate that it’s an option.

tal,
@tal@lemmy.today avatar

WINE doesn’t really isolate apps. Valve may or may not have done some degree of compatibility work for the Windows Steam library, but the game running under WINE doesn’t have any real restrictions on its ability to talk to Linux software. As long as the Steam Windows library and the Linux Steam client use some form of IPC mechanism that works on both platforms, like TCP sockets, it should “just work”, the same way a Windows web browser running in WINE would “just work” when talking to a Linux webserver on the same machine.

But if someone’s having to set up a 32-bit VM running a different MacOS guest OS, then they’re not gonna be able to run Steam in the guest (due to the Chrome requirements that Valve mentioned), and I don’t believe that programs using the Steam library can talk to Steam on another host normally.

tal,
@tal@lemmy.today avatar

Sure, you’d need to run an old version of MacOS in the VM.

tal,
@tal@lemmy.today avatar

I don’t use Evernote, so I don’t have a great feel for its capabilities, but my impression from the skims I’ve done in the past is that if someone is using Evernote, their workflow may not adapt directly to Markdown.

It has the ability to have paper documents (handwritten notes, business cards, etc.) scanned in, and the system is aware of them – it can use the business cards as contacts, for example.

It’s got to-do lists. Markdown doesn’t really have a concept of that. Org-mode does, but that’s not really a standardized format like Markdown is.

It has calendar integration.

It has embedded images. From samples, Evernote seems to bill this as people using this for things like hand sketches. There are ways to embed images in some variants of Markdown, but Markdown (and associated software) isn’t really primarily aimed at mixed-media documents, and I would guess that part of the selling point of Evernote is that there’s a low bar to adding them.

It supports embedding things like Excel documents.

All that being said, I like Markdown, and for my own notes, I tend to use org-mode for things that aren’t gonna be distributed, and Markdown for things that are. But while I use them – and for my use cases, they do some things better, like having tables that recompute values in org-mode, and I can easily use source control on them – I don’t think that they’d be a great drop-in replacement for many people who use Evernote. They’d have to use a different workflow.

Markdown is great if you spend a lot of time typing text on a computer. But if you spend time jotting notes with some sort of stylus input mechanism or on paper, interspersing them with text, putting other non-text documents with it, I don’t know if it’s the best approach.

tal,
@tal@lemmy.today avatar

I’d add that I’d like to see a couple changes to Markdown, and would like to see a “Markdown Advanced” that tries to be more like org-mode.

  • Markdown’s numbered lists are, IMHO, a mess. Markdown auto-renumbers numbered lists. Having an auto-numbering numbered list feature is a nice idea, but with the syntax used – where lists often accidentally wind up renumbered – it is, I think, not a good idea. I’ve seen a ton of people wind up with mangled quoted numbered lists that they didn’t want renumbered and approximately nobody using the syntax for auto-numbering. I think that it’d be neat to do auto-numbering with something like a leading dash, but not where existing numbers are present.

As it is:


```
2. foo
3. bar
```

becomes

  1. foo
  2. bar

EDIT: Okay, just noticed that in lemmy’s Markdown variant, the auto-renumbering apparently doesn’t occur, while it does on Reddit.

  • I think that Markdown’s use of parens in link syntax was a mistake, because parens are valid characters in a link, and using them requires escaping the URL. I think that using angle brackets or pretty much any character that isn’t used all over in URLs to delimit the URL would have been a better idea.

As it is:


```
[The Fallout series](https://en.wikipedia.org/wiki/Fallout_%28series%29)
```

produces

The Fallout series
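For reference, that percent-escaping can be generated mechanically. A quick illustration using Python’s standard library (the URL is just the Wikipedia example from above):

```python
from urllib.parse import quote

url = "https://en.wikipedia.org/wiki/Fallout_(series)"
# Keep ':' and '/' intact, but percent-encode the parentheses that would
# otherwise end the Markdown link early.
print(quote(url, safe=":/"))
# -> https://en.wikipedia.org/wiki/Fallout_%28series%29
```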

  • Markdown isn’t fully standardized. Lemmy Markdown isn’t the same as Reddit Markdown isn’t the same as pandoc Markdown. For example, in the above list, Reddit Markdown supports embedding things like blockquotes in unnumbered list items, and Lemmy Markdown does not. kbin doesn’t even have perfectly-intercompatible syntax with Lemmy – they don’t have a common “spoiler text” syntax. You generally get something more-or-less usable as long as you don’t use some of the less-common features, but it’s really not in a form where I’d be comfortable really advocating for it for document interchange.

“Markdown Advanced”

What I’d like to also have is a “Markdown Advanced”. Today, I use org-mode as a marked-up text format that can do a lot of useful things (to-do lists as a first-order concept, calendar-integrated deadlines, inline spreadsheets that can update when values update, etc). Markdown can’t do that. But org-mode was developed for emacs, and while I understand that vim and probably some other editors have partial implementations, it was not standardized. I think that for org-mode, that’s probably a good thing – it lets the format be easily-extended. But it kills org-mode for document interchange – it’s only useful for stuff that you plan to keep to yourself, where you can ensure that you’re using the same program to read and write it. I’d like to see a marked-up text format that has these features and has a frozen, fully-specified, syntax, so that many programs can read and write it.

tal,
@tal@lemmy.today avatar

I think that part of the problem with the RTS genre in a PvE mode is that, at least insofar as I’ve played the thing, it doesn’t really lend itself to a long game life.

There are basically two ways that I’ve seen RTSes played in PvE mode.

The Campaign

You tend to have a campaign, which is really oriented around learning the concepts and units. These tend to have static maps. I enjoy those, but I don’t know how many times you could play through a campaign. Maybe one could make a dynamic map generator, but I don’t know how much it would alter gameplay from campaign to campaign – you need to achieve the kind of roguelite situation where the changing elements actually force you to change up gameplay in interesting ways, keeping the play fresh and the game replayable. A lot of what keeps the campaign interesting is new units being introduced over time, and that is kind of a one-off thing. If you had other players creating new maps, I’m not sure how much ability they’d have to add interest once you’ve learned the existing units.

Like, for an RTS campaign to be playable for the long run, I think that you’d need to have some kind of dynamic map generation that not just looks convincing, but also scales up difficulty in interesting ways. Maybe generate a story too. Maybe some way to introduce interesting mechanics with the map (and RTS campaigns certainly have had map formats that support scriptable events). But…I haven’t seen anything that really aims at that. Against the Storm is sort of RTSy and is designed to have procedurally-generated maps, but while there’s a campaign on the overmap, the actual in-game battles don’t have much story or concept of a campaign.

The Skirmish

Then you have what I’ll call “skirmish”, because that’s the term Total Annihilation used – where you just play what is essentially a multiplayer game against the AI. That’s also PvE in a sense. The limitation there is that RTS AIs have been kind of disappointing. It doesn’t usually take too long to figure out the holes in the AI’s logic. The kind of “meta” and bluffing that exists in multiplayer isn’t something that the game AI can do. I think that one would really need to somehow change how things work to wind up with better AIs. And “better” isn’t just “stronger” – the AI can potentially do some things that a human might not be able to do, like micromanage many units effectively. It’s gotta be fun.

Maybe it’d be possible to have a generic AI engine, like the AI equivalent of what Havok is for physics, if a lot of the work here is common across games. It’d have to be pretty flexible, too, since some AI is gonna be game-specific. That’d potentially let there be more work on a per-game basis on AI.

Another possibility might be making it the norm for the AI to be decoupled from the main game and the API to be exposed, maybe even just leave the AI’s source open, and let modders and the like work on AIs. Haven’t seen developers try that that I can recall.

Maybe it’d be possible to sell AI packages as DLC. There have been a few games I can think of with “enemy AI” that was kind of modular – Rimworld has different “storytellers”. That’d let ongoing work be done on the AI.

I like playing Wargame: Red Dragon (real time tactics, not real time strategy) single-player but the AI in that game is pretty horrendous and doesn’t play at all the way a human would; I think that few people would want to play it single-player the way I do. So I hear you on that, wish that there were a way to come up with more-interesting AI behavior.

Other

I don’t know where your specific concern is when it comes to casual players. Complicated concepts? One might be the kind of heavy micro required to play RTSes well.

I think that some of that focus on expert players might have been from the Starcraft world. Blizzard intentionally made a game that required heavy micromanagement by doing things like limiting selection group sizes – I remember a developer talking about this. But that’s not necessarily the route that RTSes had to go – Total Annihilation went down a more-automated route, where units could be instructed to act more-autonomously. One could imagine RTSes that focused more on the high level, where control of individual units isn’t all that useful.

Another “expert player” constraint might be the focus on highly-optimized build orders. Like, one typically needs to have a build that snowballs, and one needs to manually build things at precisely the right time to play optimally. I can imagine maybe setting up advancement to happen automatically, and just choosing in advance which direction one wants to go; that makes managing the build queue not a micro operation. Some games (Sins of a Solar Empire) have had pretty extensive queues for building and research. The focus isn’t on remembering to build at the right times, but on choosing which direction to go.

I think that some of the question is also “what makes RTSes fun”? Like, does one enjoy playing through a long campaign? Is it that the element of bluffing or adaptation to one’s tactics or some other human behavior isn’t present in the AI? Following the “meta”? The feel of exploring a map (and if so, does the map need to be interesting, and is procedural generation practical)? Is it optimizing one’s build order? Is it forcing one to micromanage well?

tal,
@tal@lemmy.today avatar

I think that it works well for games that have very long development cycles where a lot of that development cycle is tweaking. Think of something like Dwarf Fortress.

tal, (edited )
@tal@lemmy.today avatar

“Roguelite” isn’t really a genre, but a catchall for many types of game that use some elements that are common in roguelikes, but aren’t really roguelikes.

I mean, Nova Drift and Inscryption are roguelites, but they just aren’t in the same genre of game at all. One’s an action Asteroids-like game, and the other a turn-based deckbuilder.

tal,
@tal@lemmy.today avatar

with Qud being the outlier.

Well, and they also have Rogue itself on there.

tal,
@tal@lemmy.today avatar

Black Friday…workers at a warehouse

That’d probably be a bigger deal for customer-facing workers at a brick-and-mortar retailer.

In that case, the retailer can’t make sales.

But warehouse workers going on strike won’t stop sales, will just slow deliveries. And Black Friday is really about deals for Christmas – a sort of start of the shopping season for that holiday. It’s well out from the day when Christmas deliveries will need to arrive. So I doubt that most deliveries from Black Friday sales need to be delivered urgently.

tal, (edited )
@tal@lemmy.today avatar

I wish the EU would just fuck off of the games industry

So, you’re also American.

I don’t generally think that “culture” policies like this make much sense, but it’s important to note that “promoting culture” is the subject of legislation in a number of countries outside the US to a greater degree than it is here.

Things like quotas on domestically-produced content on radio or television are common in a number of countries. Even Canada does that.

It’s common in many countries in Europe to have something like a “culture ministry”, where a portion of the executive branch of government is dedicated to setting policies associated with culture.

So while I get the whole gut “why is the government trying to legislate culture” thing, this sort of thing is not gonna be wildly unusual in a number of countries in Europe in other forms of media. That is, it’d be a little odd if this wasn’t specifically done with video games, by the norms there.

tal,
@tal@lemmy.today avatar

I haven’t played multiplayer competitive FPSes since players ran their own servers, so I’m not really up to date.

But if my understanding of the situation is correct, it seems like there’s a pretty straightforward workaround.

Have skill-based matchmaking by default. List an estimate for how long it will take for the match to be made.

Have an option for people willing to maybe be placed into a lopsided game to skip this and go into a general pot, first-come-first-served regardless of skill.

That keeps people who want an even match happy and people who don’t care and want to jump into a match happy.
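As a rough sketch of what I mean (names, the rating gap, and the wait-estimate formula are all invented), the default queue matches on skill and can show an estimate, while the opt-out pool is just first-come-first-served:

```python
# Illustrative two-queue matchmaking sketch: skill-matched by default, with an
# opt-out "put me in anything" pool. All names and numbers are invented.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Player:
    name: str
    rating: int
    any_match_ok: bool = False   # opted out of strict skill matching

@dataclass
class Matchmaker:
    max_rating_gap: int = 100
    skill_queue: deque = field(default_factory=deque)
    any_queue: deque = field(default_factory=deque)

    def enqueue(self, p: Player) -> None:
        (self.any_queue if p.any_match_ok else self.skill_queue).append(p)

    def estimated_wait_minutes(self, arrivals_per_minute: float = 2.0) -> float:
        # Crude estimate to show the player; a real system would track arrival
        # rates per skill bracket rather than one global figure.
        return 1.0 / max(arrivals_per_minute, 0.1)

    def try_match(self):
        # Opt-out pool: first-come-first-served regardless of skill.
        if len(self.any_queue) >= 2:
            return self.any_queue.popleft(), self.any_queue.popleft()
        # Default pool: pair the closest-rated waiting players, if close enough.
        if len(self.skill_queue) >= 2:
            ordered = sorted(self.skill_queue, key=lambda p: p.rating)
            for a, b in zip(ordered, ordered[1:]):
                if b.rating - a.rating <= self.max_rating_gap:
                    self.skill_queue.remove(a)
                    self.skill_queue.remove(b)
                    return a, b
        return None
```

Someone willing to risk a lopsided game just joins with any_match_ok set and skips the wait.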

tal,
@tal@lemmy.today avatar

I haven’t been following the current state of the art in competitive multiplayer FPS land. While waiting for a match to be made, are you just staring at a progress bar, or do they let players do stuff like warm-up play on the map?

tal,
@tal@lemmy.today avatar

I don’t per se have an issue with this, but one side effect is going to be that one-off costs of things like decoder boxes are probably gonna be a single lump-sum up-front fee rather than amortized over the contract.

It’s also not clear to me why, if such a restriction is a good idea, it would be specific to cable service. Cell phone providers, for example, do similar things.

tal,
@tal@lemmy.today avatar

I mean, it’s either gonna be an up-front lump fee – which this legislation would induce – or paying in the form of a higher monthly fee over the course of the contract, which is the norm now.

I’m pretty sure that consumer preference is for the monthly fee, else so many companies wouldn’t have moved to no setup fee and amortizing the costs over a period of time.

But I’m not sure that it’s actually worse for the customer to have that up-front fee. No up-front fee plus a monthly fee is like taking out a small, unsecured loan from the service provider. In general, if you can afford to avoid taking out unsecured loans, you’d probably rather do that.
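To put rough, entirely made-up numbers on that framing: fold a $120 decoder box into a 24-month contract and it behaves like a small loan, so at anything like an unsecured interest rate the customer ends up paying somewhat more in total than the sticker price.

```python
# All figures are hypothetical: a $120 box amortized over a 24-month contract,
# priced as if it were an 18% APR unsecured loan.
principal, months, annual_rate = 120.0, 24, 0.18
r = annual_rate / 12
monthly = principal * r / (1 - (1 + r) ** -months)   # standard amortization formula
print(f"${monthly:.2f}/month, ${monthly * months:.2f} total vs ${principal:.2f} up front")
# -> roughly $5.99/month, about $143.78 in total
```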

The preference for no up-front fees matters if there are competing companies, one with an up-front fee and one without. Then the company without gets all the business. But this would ensure that all the providers have up-front fees, so it isn’t a factor from a competition standpoint. Well, maybe it’s a factor to the extent that they’re competing with Netflix or similar. But it won’t disadvantage a company against other cable companies.

tal,
@tal@lemmy.today avatar

My understanding is that it’s popular with people who follow sports. The broadcast, everyone-sees-the-same-thing-at-one-time is a good match there.

tal,
@tal@lemmy.today avatar

Does using a URL shortener or similar link redirector avoid the problem?

tal,
@tal@lemmy.today avatar

Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products.

GPT-4 and anything similar isn’t going to pose an existential threat to humanity.

Eventually, yeah, there is probably a possibility of existential risk from AI. I don’t know where that line ultimately is, and getting an idea of that might be something important for humanity to figure out, but I am pretty confident that whatever OpenAI is presently doing isn’t it.

Same reason that Musk and his six-month moratorium on AI work doesn’t make much sense. We’re not six months away from an existential threat to humanity.

I think that funding efforts to have people in the field working on the Friendly AI problem is a good idea. But that’s another story.

tal,
@tal@lemmy.today avatar

Being an existential threat is a much higher bar – that’s where humanity’s continued existence is at threat.

There are plenty of technologies that you could hypothetically put somewhere where a life might be at stake, but very few that could put humanity’s existence on the line.

TIL that there's a FOSS Tomb Raider engine that can run on a web browser (lemmy.world)

The name is OpenLara (github.com/XProger/OpenLara ) and you can try out the WebGL build directly on your web browser on: xproger.info/projects/OpenLara/ . The web version works amazingly well on my Pixel 7a with touch controls (you have to click on the “go fullscreen” button) using Firefox as a browser.

tal,
@tal@lemmy.today avatar

Web browsers are a notoriously insecure

Compared to what?

tal, (edited )
@tal@lemmy.today avatar

Okay, I have to admit that that’s leaving me a bit nonplussed. Assume for a moment that I am concerned about the security implications of running an open-source Tomb Raider engine implementation. How exactly are you proposing running this in a more-secure fashion?

If I run an executable on my platform – say, an ELF binary on Linux – then normally that binary is going to have access to do whatever I can do. That’s a superset of what code running inside a Web browser that I’m running can do.

Are you advocating for some form of isolation? If so, what?

EDIT: And I’ve got another question for you. Let’s say that you’re worried about security of browser APIs. How do you avoid this? Because if your browser is vulnerable to some exploit in its WebGL implementation, not clicking on a link explicitly labeled as going to a website that uses 3D – which is what you appear to be urging people to do – isn’t going to avoid it. Any site you browse to – including those not labeled as such – could well expose you to that vulnerability.

EDIT2: In another comment, you say that you want to trust the “kernel” instead of the browser. Okay, fine. There are a whole class of isolation mechanisms there. What mechanism are you proposing using? Remember that you are needing to give access to your 3d hardware to whatever software package is involved here, and the Linux kernel, at least, doesn’t have a mechanism for creating virtual, restricted “child” graphics devices. The closest I can think of on Linux you can get at a kernel level there would be pass-through from a VM to a dedicated graphics adapter, which probably isn’t going to be an option for most people and I have doubts about being a carefully-hardened pathway compared to browser APIs.

tal,
@tal@lemmy.today avatar

Kernel sandboxing.

That’s a class of different mechanisms. I updated my comment above. I’ll repeat the text there:

In another comment, you say that you want to trust the “kernel” instead of the browser. Okay, fine. There are a whole class of isolation mechanisms there. What mechanism are you proposing using? Remember that you are needing to give access to your 3d hardware to whatever software package is involved here, and the Linux kernel, at least, doesn’t have a mechanism for creating virtual, restricted “child” graphics devices. The closest I can think of on Linux you can get at a kernel level there would be pass-through from a VM to a dedicated graphics adapter, which probably isn’t going to be an option for most people and I have doubts about being a carefully-hardened pathway compared to browser APIs.

Which is why using the web without JavaScript is a security measurement which I strongly recommend to enable.

Virtually every website out there today uses Javascript. Lemmy uses Javascript. What makes this particular website a risk?

do you, really?

Yeah, I do. Fifteen years ago, I used NoScript, and some things broke, but it was usable; there were enough people running non-JS-capable browsers that websites had a reasonable chance of functioning. The Web generally does not function without Javascript today.

tal,
@tal@lemmy.today avatar

So what’s the best solution? You might think switching to an electric vehicle is the natural step. In fact, for short trips, an electric bike or moped might be better for you—and for the planet.

However, in an enclosed EV, you aren’t out in the weather.

I’ve spent time bike-commuting, and I live somewhere where the weather is pretty mild. But there’s a pretty big difference between being out in the wet and wind and cold when it’s raining or whatnot and being inside a dry, air-conditioned or heated cabin.

But it’s more than that—they are actually displacing four times as much demand for oil as all the world’s electric cars at present, due to their staggering uptake in China and other nations where mopeds are a common form of transport.

I mean, that’s fine, but as the article points out, that’s because China’s consumers are generally more price-sensitive and the likely alternative is a moped. If you’re gonna get a gasoline-powered moped or an electric bicycle, sure, unless the range is an issue, the e-bike is a pretty reasonable drop-in replacement.

But people in the US don’t generally commute via gasoline-powered moped. That is, they’ve already made a judgement as to the tradeoffs, and I strongly doubt that whether or not the vehicle has an electric or gasoline motor is going to change this.

www.census.gov/content/dam/Census/…/acs-48.pdf

I don’t know which category a moped fits into here, but looking at Table 1 on Page 2, I assume that it’d be one of the following groups:

  • 0.1% of Americans commute via motorcycle.
  • 0.5% of Americans commute via bicycle.
  • 1.0% of Americans use “other means”.

Compare to:

  • 84.8% use a car, truck, or van
  • 5.0% use public transportation

I don’t think that introducing electric motors into the mix is going to be the factor that drastically changes the above ratios.

tal,
@tal@lemmy.today avatar

That being said, it is true that there are vehicles in-between a car and a moped, including things that have enclosed cabins. But…they haven’t really taken off as a class in the US, be it for safety or other reasons.

The EU has a “quadricycle” class of vehicles:

en.wikipedia.org/…/Quadricycle_(EU_vehicle_classi…

The US equivalent is a “low speed vehicle”.

en.wikipedia.org/wiki/Low-speed_vehicle

I remember watching a Fifth Gear episode where they almost rolled one of these, a Citroen Ami, over in a tight turn – they apparently don’t need to conform to the same safety requirements that automobiles do. I’ll believe that there is a legitimate niche – like, in a city with a serious lack of parking, one might be able to squeeze into tight parking spots that a full-size car couldn’t. And if you’re really, really tight on funds, then one might make sense.

electrek.co/…/are-electric-micro-cars-nevs-and-ls…

That’s one of the reasons that LSVs are limited to just 25 mph (40 km/h) top speed and can only be operated on roads with speed limits of 35 mph (56 km/h). Both of these are part of the federally mandated LSV regulations and are designed to prevent these vehicles from mixing with larger full-size vehicles at higher speeds, where the result of crashes are more likely to be fatal.

But you give up the ability to travel on anything other than low-speed roads, you lose crash safety, you lose space, you lose range, a number of amenities have likely been shaved off, and that’s a lot to trade for easier parking and a lower price. I think that that makes something like a quadricycle a difficult sell to most here in competition with a used regular car. Maybe for special cases, like where you’re going to operate them off public roads – I mean, the golf cart is successful on golf courses. And ATVs are a thing as an off-road utility vehicle on things like farms and on large lots. But I’m skeptical that electric motors are going to make LSVs a major portion of road traffic.

tal,
@tal@lemmy.today avatar

And I would feel safe claiming that most Americans who might consider bikeped commutes rule it out because it is just not practical with our sprawling, idiotic suburban model.

Cars are what permit suburban areas to be practical; it was the rise of the car (and a few related technologies to a lesser degree, like the tram) that made the suburb popular. So, yeah, I think that it’s probably fair to say that suburbs aren’t well-suited to bicycle commuting, or foot.

But in general, people can – well, in general; if you’re a farmer or something that constrains you to live away from urban areas, no, but in general – live in urban areas rather than suburban. I mean, we have cities, and there are built-up areas in those cities, and in general, if you live in the suburb of a city, you could live in the city proper.

But that’s not the choice that people have generally been making. If we expected people to want to live in an urban environment, we’d expect to see apartment and condo prices in high-density areas constantly rising. We’d expect to see population on net shifting from suburbs into cities.

googles

pewresearch.org/…/psd_05-22-18_community-type-01-…

https://www.pewresearch.org/social-trends/wp-content/uploads/sites/3/2018/05/PSD_05.22.18_community.type-01-03-.png

That shows that more people from outside the US entering the US move into an urban area than a suburban area. But inside the US – and overall – people have generally headed out of urban areas to live in suburbs.

That is, I don’t think that the problem is that planners have failed to provide what the consumer generally wants. I think that the consumer has had the option, and has decided that he wants to live in a suburb with a car.

Also, I think that there’s a question of whether this is US-specific or whether the US is just a leading indicator. My guess is that the world will likely tend to shift towards suburbs, absent some form of technological change. One tends to see urbanization globally – that is, people move out of rural areas, as a smaller portion of a developed economy is involved in agriculture. But that doesn’t mean that it’s to high-density areas; that’s inclusive of growth of suburbs:

…yale.edu/…/global-urban-growth-typified-by-subur…

To many people, the term “urban growth” connotes shiny new high-rise buildings or towering skyscrapers. But in a new analysis of 478 cities with populations of more than 1 million people, researchers at the Yale School of Forestry & Environmental Studies (F&ES) found urban growth is seldom typified by such “upward” growth. Instead, the predominant pattern in cities across the world is outward expansion: Think suburbs instead of skyscrapers.

The article does mention India’s zoning restrictions:

In contrast, in places where populations are growing but zoning is sometimes restrictive (India)…

I’ve read before about problematic Indian zoning laws that restrict heights of construction in Indian cities; that might legitimately be a case where people are kept from living in higher-density areas despite wanting to do so. But I’m skeptical that that is a dominant factor globally. If one removed height restrictions on construction in some cities – take London, for example, where one has line-of-sight restrictions – I can certainly believe that prices in the built-up areas would drop somewhat, and a greater portion of people would live in the city proper than is the case today. Fine, that probably makes sense. But are height restrictions the dominant reason that people don’t choose to live in urban areas? Chicago has relatively non-restrictive height regulations, but it’s seen outflow too. This article discusses it and finds a small amount of growth right in downtown, a lot of growth in suburbs and exurbs, and population loss in the area in between:

newgeography.com/…/003560-chicago-outer-suburban-…

The story was much different outside the core area. The balance of the city, where 93 percent of the people live, lost 250,000 residents – a loss greater than that of any municipality in the nation over the period – including Detroit. The losses were pervasive. More than 80 percent of the city’s 77 community areas located outside the core lost population.

Thus, the core area boom is far more than negated by the losses in the balance of the city. The losses that were sustained in the area between the urban core and the outer suburbs and exurbs were virtually all in the city itself.

The overwhelming reality of metropolitan growth in Chicago, however, is that the outer suburbs and exurbs continue to capture virtually all growth. Overall, areas outside 20 miles from the core of Chicago gained 573,000 residents between 2000 and 2010. By contrast, the entire metropolitan area gained only 362,000 residents. As a result, these outer suburbs and exurbs accounted for 158% of the Chicago metropolitan area’s population growth between 2000 and 2010. The core gains, city and inner suburban losses are illustrated in Figure 3.

That doesn’t really look like what one would expect if people were really intent on living in higher-density areas.

tal,
@tal@lemmy.today avatar

Personally I can say the only reason I don’t ride my e-bike more for daily use is due to the rampancy of bike theives and vandals. Shit is genuinely getting hard to deal with and I don’t have time or money to put up with it.

I remember a YouTube video someone did in New York City where they simulated stealing a bike using various increasingly-slow and obvious methods. Started with a pair of bolt cutters and went through a few others, including an angle grinder.

It culminated with them using a hammer and chisel to slowly carve their way through a bike lock chain. Someone stopped to help and suggested that they hold the chain differently. An NYPD cruiser stopped, asked them to move out of the street – the bike was at the edge of the sidewalk and they were lying in an active lane – and then moved on.

I think that as long as something is light enough to be placed into a van and is stored in the open, if crime is an issue in the area, it’s probably going to either need to be really cheap – so not worth stealing – or have sophisticated measures to deter it, like requiring registration or maybe smartphone-style components that require cryptographic authentication and can’t be “reset” without the owner being involved.

tal,
@tal@lemmy.today avatar

and while you might be able to argue that e-bikes somehow aren’t electric vehicles because they’re partially human-powered, anyone who thinks a moped isn’t one can sod off. They are fully motor-driven.

While I’ve seen people use “moped” and “motor scooter” interchangeably, that’s really a shift in terminology; a “moped” is originally and still can be a “motorized” vehicle that can also be “pedaled”. Now, I don’t know how often people actually pedal even with pedalable ones, but…

en.wikipedia.org/wiki/Moped

All of the example images there are vehicles that can be pedaled.

tal, (edited )
@tal@lemmy.today avatar

I don’t disagree that often an early release can really kill a game. I think that Fallout 76 would have done much better had it not gone out the door for a while, and I think that the poor quality at release really hurt reception; despite Bethesda putting a lot of post-release work into the game, a lot of people aren’t going to go back and look at it. CDPR and Cyberpunk 2077 might have done better by spending more time or deciding to cut the scope earlier in development too. But, a few points:

  • First, game dev is not free. The QA folks, the programmers, all that – they are getting paid. Someone has to come up with money to pay for that. When someone says “it needs more time”, they’re also saying “someone needs to put more money in”.
  • Second, time is money. If I invest $1 and expect to get $2 back, when I get that $2 matters a lot. If it’s in a year, that’s a really good deal. If it’s in 20 years (adjusting for inflation), that’s a really bad deal – you have a ton of lower-risk things than you could do in that time. Now, we generally aren’t waiting 20 years, but it’s true that each additional month until there is revenue does cut into the return. That’s partly why game publishers like preorders – it’s not just because it transfers risk of the game sucking from them to the customers, but also because money sooner is worth more.
  • Third, I think that there are also legitimate times when a game’s development is mismanaged, and even if it makes the publisher the bad guy, sometimes they have to be in a position of saying “this is where we draw the line”. Some games have dev processes that just go badly. Take, say, Star Citizen. I realize that there are still some people who are still convinced that Star Citizen is gonna meet all their dreams, but for the sake of discussion, let’s assume that it isn’t, that development on the game has been significantly mismanaged. There is no publisher in charge of the cash flow, no one party to say “This has blown way past many deadlines. You need to focus on cutting what needs to be cut and getting something out the door. No more pushing back deadlines and taking more cash; if the game does well, you can do DLC or a sequel.”

EDIT: I think that in the case of Cities: Skylines 2, sure, you can probably improve things with dev time. But I also think that the developer probably could have legitimately looked at where things were and said “okay, we gotta start cutting/making tradeoffs” earlier in the process. Like, maybe it doesn’t look as pretty to ship with reduced graphical defaults, but maybe that’s just what should have been done. Speaking for myself, I don’t care that much about ground-level views or simulated individuals in a city-builder game, and that’s a lot of where they ran into problems – they’re spending a lot of resources and taking on a lot of risk for something that I just don’t think is all that core to a city-builder game. I think that a lot of the development effort and problems could have been avoided had the developer decided earlier-on that they didn’t need to have the flashiest city sim ever.

Sometimes a portion of the game just isn’t done and you might be better off without it. Bungie has had developers comment that maybe they shouldn’t have shipped The Library level in Halo. My understanding is that some of the reason that different portions of the level look similar is that originally, the level was intended to be more open, and they couldn’t make it perform acceptably that way and had to close off areas from each other. I didn’t dislike it as much as some other people did, but maybe it would have been better not to ship it, or to significantly reduce the scope of the level.

I mean, given an infinite amount of dev time and resources, and competent project management, you can fix just about everything. Some dev timelines are unrealistic, and sometimes a game can be greatly-improved with a relatively-small amount of time. My point is that sometimes the answer is that you gotta cut, gotta start cutting earlier, and then rely on a solid release and putting whatever else you wanted to do into DLC or maybe a sequel.

I won’t lie: That’s the kind of talk that really makes me wish Valve would quit playing around with Steam and weird hardware experiments, and go back to making new games.

I don’t agree at all. There’s one Valve and Steam. If it’s not Valve, it’s gonna be Microsoft or someone, and I’d much rather have Valve handling the PC game storefront than Microsoft. There are lots of game developers and publishers out there that could develop a game competently, but not many in Valve’s position.

tal, (edited )
@tal@lemmy.today avatar

Valve can’t count to 3 though.

Capcom had years of jokes on exactly that point with the Street Fighter series, but they eventually did release Street Fighter III.

EDIT: For those not familiar, here’s the relevant portion of the series timeline:

  • Street Fighter
  • Street Fighter II: The World Warrior
  • Street Fighter II: Championship Edition
  • Street Fighter II: Hyper Fighting
  • Super Street Fighter II: The New Challengers
  • Super Street Fighter II Turbo
  • Street Fighter Alpha
  • Street Fighter: The Movie (the video game)
  • Street Fighter Alpha 2
  • X-Men vs. Street Fighter
  • Street Fighter EX
  • Street Fighter III: New Generation

tal,
@tal@lemmy.today avatar

It’s a lot better, but it’s not Fallout 5, which is what I think a lot of people – including myself – actually wanted.

If you wanted to play a game in the Fallout universe with some of your friends or your spouse or something, then, yeah, I can see Fallout 76 being a legitimate fit.

But Bethesda built up a fan base around a franchise that liked playing an immersive, story-oriented, highly-moddable game where the main character is kind of core to the story. They moved to a genre where xxPussySlayer69xx is jetpacking around, the story couldn’t matter much past the initial part of the game (since the point of the online portion is to have people replaying relatively-cheap-to-produce content), that couldn’t be modded much (to keep balance and players from cheating), and where the player’s character cannot matter much, because there are many player characters.

They did make some things that I’d call improvements, like shifting away from PvP (the Fallout 76 playerbase has not shown a lot of enthusiasm for it) and reducing the emphasis on survival mechanics (it turns out that focusing a lot on gathering food and water can kind of detract from playing the rest of the game if you have limited time to play with other people).

But Fallout 76 just fundamentally cannot be Fallout 5, because it’s aimed at online play, replaying the same events over and over. It can be a lot better at being an online-oriented Fallout-themed game than Fallout 76 was at release, and they did that.

People complaining about, say, the lack of human NPCs in the initial release are complaining that they want that kind of single-player-oriented game. Bethesda put some in, true enough, shifted things a little towards earlier games in the series. But they have not and were not going to convert the game into Fallout 5.

There have been franchises that have spanned multiple video game genres. Think of, say, Star Wars. But I’m not sure how often there are long-running video game franchises that shift to other genres successfully. If Capcom decided to make a 4X Mega Man game, or a dating sim Mega Man game, I’m not sure that things would go well.

Granted, Fallout 76 is closer to earlier 3D Fallout games than a hypothetical Mega Man dating sim would be. But I think that there are some important, not immediately-obvious divergences from what made the series popular.

tal,
@tal@lemmy.today avatar

Also, while some genres can be fixed after release, some can’t because they aren’t very replayable.

A number of adventure games, for example – you’re probably not going to play through them many times. If you blow the initial release, you kind of blew the experience.

tal,
@tal@lemmy.today avatar

Some of my favorite Early Access games, I’d actually rather just finish development and then start on a new release.

Take Nova Drift and Caves of Qud. Both games, I think, are in a state where I have gotten my money’s worth out of them many times over. But they’re still Early Access.

But, hey, as a player, who is going to complain about more stuff being provided for free?

At this point, my preference would be to say “Okay, you did a good job with the resources you had. Now, I would like to give you more money and you can hire more people and produce content at a higher rate, because I really like the stuff you make.”

Or at least DLC or something. Like, I don’t have a problem with blocky pixel art as a way of reducing dev costs. I think that many traditional roguelikes have benefited from just using text – means that gameplay revisions are easier, and that one doesn’t need an art team. I think that it’s an effective tactic. But having seen how much art has added to, say, Cataclysm: Dark Days Ahead, I’d like to be able to purchase high-resolution art for Caves of Qud. I pay for tons of art in many, many games that I enjoy much less than Caves of Qud. Ditto for a number of other pixel-art indie releases that I like.

I’d like to see more content coming out at a higher rate, and that is gonna require funds.

Paradox does this. They have a deal where they make a game and if I like it, I can send them more money and they will make more game at a pretty good clip. Now, maybe not everyone wants to spend what some Paradox games run if you take into account all DLC – okay – but I’m not left in a situation where I want more of Game X but I’m unable to buy it.

tal,
@tal@lemmy.today avatar

IIRC, though, that isn’t “give developer some more money and keep plugging”. It was “take the game in its current state, hand it to another developer to get it into a releasable state, and ship it”.

googles

Yeah. Basically, 3D Realms just kept kicking the can down the road. Gearbox took over, cleaned up what was there, and shipped it in half a year. It wasn’t the perfect, ideal 3D FPS, but I suspect that cleaning up what was there and making what return was possible (and at least getting something to the people who had preordered the game many years back) was probably the right move. I don’t think that 3D Realms was going to produce a huge success if they had another two years or something. It probably would have been a good idea to have wrapped up the project several years earlier than was the case.

en.wikipedia.org/wiki/Duke_Nukem_Forever

In 1996, 3D Realms released Duke Nukem 3D. Set apart from other first-person shooter games by its adult humor and interactive world, it received positive reviews and sold around 3.5 million copies.[8] 3D Realms co-founder George Broussard announced the sequel, Duke Nukem Forever, on April 27, 1997,[9] which he expected to be released by Christmas 1998. The game was widely anticipated.[8] Scott Miller, 3D Realms’ co-founder, felt the Duke Nukem franchise would last for decades across many iterations, like James Bond or Mario.[8] Broussard and Miller funded Duke Nukem Forever using the profits from Duke Nukem 3D and other games. They gave the marketing and publishing rights to GT Interactive, taking only a $400,000 advance.[8] 3D Realms also began developing a 2D version of Duke Nukem Forever, which was canceled due to the rising popularity of 3D games.[10]

Rather than create a new game engine, 3D Realms began development using Id Software’s Quake II engine.[8] They demonstrated the first Duke Nukem Forever trailer at the E3 convention in May 1998. Critics were impressed by its cinematic presentation and action scenes, with combat on a moving truck.[8] According to staff, Broussard became obsessed with incorporating new technology and features from competing games and could not bear for Duke Nukem Forever to be perceived as outdated.[8] Weeks after E3, he announced that 3D Realms had switched to Unreal Engine, a new engine with better rendering capabilities for large spaces, requiring a reboot of the project.[8] In 1999, they switched engines again, to a newer version of Unreal Engine.[8]

By 2000, Duke Nukem Forever was still far from complete. A developer who joined that year described it as a series of chaotic tech demos, and the staff felt that Broussard had no fixed idea of what the final game would be.[8] As the success of Duke Nukem 3D meant that 3D Realms did not require external funding, they lacked deadlines or financial pressure that could have driven the project. Broussard became defiant in response to questions from fans and journalists, saying it would be released “when it’s done”.[8] In December 2000, the rights to publish Duke Nukem Forever were purchased by Take-Two Interactive, which hoped to release it the following year.[11] By 2001, Duke Nukem Forever was being cited as a high-profile case of vaporware, and Wired gave it the “vaporware of the year” award.[12]

At E3 2001, 3D Realms released another trailer, the first public view of Duke Nukem Forever in three years. It received a positive response, and the team was elated, feeling that they were ahead of their competitors. However, Broussard still failed to present a vision for a final product. One employee felt that Miller and Broussard were developing “with a 1995 mentality”, with a team much smaller than other major games of the time. By 2003, only 18 people were working on Duke Nukem Forever full time.[8] In a 2006 presentation, Broussard told a journalist the team had “fucked up” and had restarted development.[8] By August 2006, around half the team had left, frustrated by the lack of progress.[8]

According to Miller, the Canadian studio Digital Extremes was willing to take over the project in 2004, but the proposal was rejected by others at 3D Realms. Miller later described this as a “fatal suicide shot”.[13] In 2007, 3D Realms hired Raphael van Lierop as the new creative director. He was impressed by the game and felt it could be finished within a year, but Broussard disagreed.[8] 3D Realms hired aggressively to expand the team to about 35 people. Brian Hook, the new creative lead, became the first employee to push back against Broussard.[8] In 2009, with 3D Realms having exhausted its capital, Miller and Broussard asked Take-Two for $6 million to finish the game.[8] After no agreement was reached, Broussard and Miller laid off the team and ceased development.[8] However, a small team of ex-employees, which would later become Triptych Games, continued developing the game from their homes.[14]

In September 2010, Gearbox Software announced that it had bought the Duke Nukem intellectual property from 3D Realms and would continue development of Duke Nukem Forever.[15] The Gearbox team included several members of the 3D Realms team, but not Broussard.[15] On May 24, 2011, Gearbox announced that Duke Nukem Forever had “gone gold” after 15 years.[16] It holds the Guinness world record for the longest development for a video game, at 14 years and 44 days,[17] though this period was exceeded in 2022 by Beyond Good and Evil 2.[18]

In 2022, Miller released a blog post on the Apogee website about 3D Realms’ failure to complete Duke Nukem Forever. He attributed it to three major factors: understaffing, repeated engine changes and a lack of planning.[13] On Twitter, Broussard responded that Miller’s claims were “nonsense”, described him as manipulative and narcissistic, and accused him of blaming others. He blamed Miller for the loss of 3D Realms and the Duke Nukem intellectual property.[13]

I think that one key phrase there might be important: “As the success of Duke Nukem 3D meant that 3D Realms did not require external funding, they lacked deadlines or financial pressure that could have driven the project.” Like, this is maybe a good example of where they really did need someone outside the project to say “I need you to get milestones and a schedule in shape”, and where more money and time isn’t the right answer. It’s not that the project is on the cusp of amazing success and the people managing the project just mis-estimated the schedule by several months. It’s that they just aren’t anywhere near where they want to be and don’t have a realistic roadmap for getting there.

tal,
@tal@lemmy.today avatar

Specifically with the Fallout series, I think that one complication is that there was a lot of unhappiness way back when with the series moving from a much-liked isometric, turn-based/real-time game to a 3D game with shooter elements. A lot of people, including myself, didn’t think that it would likely reproduce what they liked about the series. And, well, it was a change, but what ultimately came out was pretty good, and while I’m sure that it didn’t cut it for some people – you had things like the Wasteland series continuing the isometric approach – I think that it was a pretty decent transition. The same people who liked the isometric games generally liked Fallout 3 and Fallout: New Vegas. So in that case, the game series was taken through a major shift that a number of players were skeptical about, and it generally worked.

But with Fallout 76, I think that the transition caused tradeoffs that didn’t work out as well for many players.

tal,
@tal@lemmy.today avatar

There were also revisions of even game cartridges for consoles.

I remember having a first revision of the first Legend of Zelda for the Game Boy. A bug meant that hitting a particular button combination (Select or Start+Select, can’t recall) precisely when crossing a screen boundary would let you cross two screens rather than one.

That was patched in a later revision of the cartridge.

tal,
@tal@lemmy.today avatar

Most people don’t have VR gear, and it’s a VR-only game.

tal,
@tal@lemmy.today avatar

I am dubious that one needs a product involved with a game engine for most game-applicable uses of AI. Some of the things listed could be useful, but I’d think that it’d make more sense to use an engine-agnostic tool and import it.

I can believe that there are useful tools for, say, pixel art. But it’s not clear to me that those need to be coming from a single company.

tal,
@tal@lemmy.today avatar

A couple that I’d like to see:

  • Realistic naval fleet combat sims. There’s not a lot out there. I assume that there’s probably limited demand – flying fighter planes seems to be a lot more popular when it comes to military sims. Rule the Waves does keep seeing releases, but it’s not a genre with many decent entrants.
  • Kenshi-style games. I’m not sure that there is a name for the genre, but sandbox, open-world, squad-based combat with a base-building and economic side.
tal,
@tal@lemmy.today avatar

If you haven’t looked recently, you might take another look.

I felt the same way when Slay the Spire came out in 2019 – not a lot of similar games at the time, and I couldn’t figure out why more developers hadn’t made similar games, as it seemed like a very good match for indie studios. But there have been a whole lot of games that came out since then.

Searching Steam for games tagged as single-player and deckbuilder, and sorting by user review

I get over 600 hits, almost all of which came out in the past three years. I’d say that single-player deckbuilders – and note that I’m assuming that you’re talking about deckbuilder games, not, say, solitaire implementations or similar, as I think that there are pretty good entrants there – are actually doing pretty well.

tal,
@tal@lemmy.today avatar

Searching Steam for games tagged as lovecraft and horror and sorting by user review gives me about 500 entries.

I think that Lovecraft’s setting is actually virtually the only fictional setting where you’re spoiled for choice, because Lovecraft permitted other people to use his setting. Like, you only get to do a Star Wars game if Lucasarts licenses it, because they leverage their copyright on the setting. Most people and companies who create a setting don’t allow other people to freely use it, and copyright law permits them to make that restriction. But Lovecraft was unusual in that he specifically encouraged other people to build on his world.

Maybe Robin Hood or a small handful of others from history, like Greek or Norse mythology, that developed before copyright law had really become the norm.

I dunno. Maybe there should be some kind of Creative Commons license that permits use of setting and maybe characters, while still keeping an individual work copyrighted, to encourage creation of collaboratively-developed settings like that.

This could mix with other genres as well like survival and potentially rogue-like stuff.

One of the top entries I see on Steam – though I’ve never played it – is an Overwhelmingly Positive-rated game, Disfigure, that appears to be a Lovecraftian action roguelike that just came out a couple of months ago.

store.steampowered.com/app/2083160/Disfigure/

EDIT: Well, hmm. Someone tagged it as Lovecraftian, but the author doesn’t really describe it that way. Just creepy.
