
Posts

Showing posts with the label Bernhard Suter

I just turned on the bot to re-post a subset of posts from the takeout archive to diaspora over the next year, at the...

I just turned on the bot to re-post a subset of posts from the takeout archive to diaspora over the next year, at the anniversary date of each post, 3 hours apart and at minute 42: https://diasporing.ch/tags/gplusarchive It's a bit weird to share an account with a bot, but I get to comment on posts of my former self with the benefit of hindsight...
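To make the scheduling a bit more concrete, here is a minimal sketch of how such anniversary slots could be computed from the takeout JSON. The creationTime field, its timestamp format and the slot-spacing logic are assumptions for illustration, not the bot's actual code.

#!/usr/bin/python
# Illustrative sketch only: compute anniversary repost slots for archived
# posts, spread 3 hours apart with the minute pinned to 42. The creationTime
# field name, its format and the slot logic are assumptions, not the bot's code.
import json
import sys
from datetime import datetime, timedelta

def anniversary_slot(creation_time, index):
    # Parse the first 19 characters ("YYYY-MM-DDTHH:MM:SS"), ignoring the timezone.
    original = datetime.strptime(creation_time[:19], '%Y-%m-%dT%H:%M:%S')
    # One year later, at minute 42, then shifted by 3 hours per queued post.
    base = original.replace(year=original.year + 1, minute=42, second=0)
    return base + timedelta(hours=3 * index)

for index, filename in enumerate(sys.argv[1:]):
    post = json.load(open(filename))
    if 'creationTime' in post:
        print('%s -> %s' % (filename, anniversary_slot(post['creationTime'], index)))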

Slightly OT: if you are in it for the long run, why it makes sense for users to bet on open-source and...

Slightly OT: if you are in it for the long run, why it makes sense for users to bet on open-source and open-standards: Because in the long run, proprietary, closed-source solutions tend to become evolutionary dead-ends. Which might also be one of the reasons for having a serious look at the open-source social-media platforms (e.g. https://fediverse.party/ ). They may not look as hip and evolve as fast as their proprietary commercial competitors, but might still be around in a decade or so. https://blog.kugelfish.com/2014/10/why-open-source-software-works.html

Not that I am a big fan of Diaspora* so far - it certainly feels like a few steps backwards in terms of...

Not that I am a big fan of Diaspora* so far - it certainly feels like a few steps backwards in terms of user experience and polish, in particular for photo presentation and convenient photo upload directly from a mobile device. But then I have always had a soft spot for the underdog, and I am looking forward to moving from the ghost town to a ghost village - hopefully some more interesting people will end up moving there too.

Originally shared by Bernhard Suter

My new primary social-media account is now kugelfish@diasporing.ch https://diasporing.ch/people/f1b0fdf0b1710136466f7a163e59d8f4

BTW, if you ever wanted an off-shore social media account in Switzerland, here is your chance: https://diasporing.ch

#signalflare #diaspora #plexodus

Has anybody tried to use something like Zip Extractor...

Has anybody tried to use something like Zip Extractor ( https://chrome.google.com/webstore/detail/zip-extractor/mmfcakoljjhncfphlflcedhgogfhpbcd ) to expand a G+ posts takeout archive into Drive and see whether the HTML content can be displayed from there?
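I have not tried it myself yet, but as a quick local sanity check, something like the following sketch could list the HTML pages inside the takeout zip before expanding it into Drive. It only uses the standard-library zipfile module; the .html filter is an assumption about how the archive is laid out.

#!/usr/bin/python
# Rough sketch: list the HTML post pages inside a G+ takeout zip, to get an
# idea of what Zip Extractor would expand into Drive.
import sys
import zipfile

for filename in sys.argv[1:]:
    with zipfile.ZipFile(filename) as archive:
        for name in archive.namelist():
            if name.endswith('.html'):
                print(name)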

First successful automated reposts of converted G+ takeout archive content to my diaspora account...

First successful automated reposts of converted G+ takeout archive content to my diaspora account (kugelfish@diasporing.ch):
- link sharing example: https://diasporing.ch/posts/fd3c0cb0e2c50136ed0b7a163e59d8f4
- photo sharing: https://diasporing.ch/posts/2603692
- cats: https://diasporing.ch/posts/201826e0e2c30136ed0d7a163e59d8f4
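For reference, the reposting itself can be scripted against a pod. Below is a rough sketch along the lines of the third-party diaspy library's documented quickstart; the pod, username and password are placeholders, and this is not the actual repost bot.

#!/usr/bin/python
# Rough sketch of posting one converted archive entry to a diaspora* pod using
# the third-party 'diaspy' library. Credentials and text are placeholders; the
# interface shown follows diaspy's quickstart, not the actual bot code.
import diaspy

connection = diaspy.connection.Connection(
    pod='https://diasporing.ch',
    username='kugelfish',
    password='CHANGE-ME')
connection.login()

stream = diaspy.streams.Stream(connection)
stream.post('Reposted from my G+ takeout archive #gplusarchive')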

It might also be worthwhile to start thinking about what people would actually want to do with their G+ takeout data.

It might also be worthwhile to start thinking about what people would actually want to do with their G+ takeout data. Given the huge mismatches in capabilities and features between platforms, any data transformation will likely be lossy, limited to the lowest common denominator between them. And then there are also the pesky questions of copyright, ownership and control.

The simplest use-case would be a static archive of some part of the G+ experience, reproducing the semantic and/or visual structure of the data as it appears on the G+ site today. At the other end of the spectrum would be a live transfer onto a new platform, with entity remapping (user identity, post timeline and interactions) and full referential integrity. I don't know if anybody would be ambitious enough to attempt something like that.

What I am planning to do is somewhere in the middle: take only a small subset of the data (my own identity & post data) and transform it to create new,...
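As a rough illustration of that middle ground, the following sketch pulls just a minimal per-post subset out of the takeout JSON files into one consolidated file for later re-posting. Apart from 'content', the field names are assumptions about the takeout schema.

#!/usr/bin/python
# Sketch of the "somewhere in the middle" approach: extract only text,
# timestamp and permalink from each takeout post JSON and dump them as one
# consolidated JSON list on stdout.
import json
import sys

subset = []
for filename in sys.argv[1:]:
    post = json.load(open(filename))
    subset.append({
        'content': post.get('content', ''),
        'creationTime': post.get('creationTime', ''),
        'url': post.get('url', ''),
    })

json.dump(subset, sys.stdout, indent=2)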

Nice library which can be used for converting the HTML-formatted content strings in the JSON takeout archive to markdown:

Nice library which can be used for converting the HTML-formatted content strings in the JSON takeout archive to markdown:

#!/usr/bin/python
# Convert the HTML 'content' field of each takeout post JSON file to markdown.
import json
import sys

import html2text

# Drop hyperlinks from the converted markdown output.
converter = html2text.HTML2Text()
converter.ignore_links = True

for filename in sys.argv[1:]:
    post = json.load(open(filename))
    print('%s :' % (filename, ))
    if 'content' in post:
        print(converter.handle(post['content']))

https://github.com/aaronsw/html2text