Discussion:
[GoLugTech] SuSE's "CentOS" is here -- Fwd: [opensuse-announce] openSUSE Leap 42.1 is RELEASED
Bryan J Smith
2015-11-05 15:57:09 UTC
Permalink
If you haven't heard, the OpenSuSE Project has released its own
"Enterprise Linux Rebuild" using SuSE Linux Enterprise sources --
OpenSuSE Leap -- essentially SuSE's own "CentOS."

Until the Novell purchase, SuSE did _not_ release Source RPMs for its
SLE Server (SLES) and Desktop (SLED) products. Novell reversed that
shortly after the purchase, and even open sourced many tools (short of
the "crown jewels" of Novell itself, not quite going 100% open source).
But no project had bothered to rebuild SLES/SLED from SRPMs. That now
changes with OpenSuSE Leap.

The project is new, and starts at version 42 (the answer to life, the
universe, and everything), but they are going to try to mix in select,
newer Upstream packages to keep a few things more current. The Fedora
Project's Extra Packages for Enterprise Linux (EPEL) does something
similar, but only for non-core RHEL add-ons, not the core/base channel.
Red Hat does rebase some select packages regularly, usually desktop
applications (the Client/Desktop/Workstation base and the Server
"Optional" channels), but otherwise typically avoids rebasing outside
of Software Collections (SCL).

It'll be interesting to see how this effort develops long-term, but
it's definitely nice to see another free "Enterprise Linux" rebuild.

-- bjs


---------- Forwarded message ----------
From: Richard Brown <***@opensuse.org>
Date: Wed, Nov 4, 2015 at 11:29 AM
Subject: [opensuse-announce] openSUSE Leap 42.1 is RELEASED
To: "opensuse-***@opensuse.org" <opensuse-***@opensuse.org>,
opensuse-project <opensuse-***@opensuse.org>, oS-fctry
<opensuse-***@opensuse.org>


The wait is over and a new era begins for openSUSE releases.
Contributors, friends and fans can now download the first Linux hybrid
distro openSUSE Leap 42.1. Since the last release, exactly one year
ago, openSUSE transformed its development process to create an
entirely new type of hybrid Linux distribution called openSUSE Leap.

Version 42.1 is the first version of openSUSE Leap that uses source
from SUSE Linux Enterprise (SLE), providing a level of stability that
will prove to be unmatched by other Linux distributions. Bonding
community development and enterprise reliability provides more
cohesion for the project and its contributors' maintenance updates.
openSUSE Leap will benefit from the enterprise maintenance effort and
will have some of the same packages and updates as SLE, which is
different from previous openSUSE versions that created separate
maintenance streams.

Community developers contribute to Leap and to the upstream projects
behind this release in equal measure, which bridges the gap between
the matured packages and the newer packages found in openSUSE’s other
distribution, Tumbleweed.

Since the move was such a shift from previous versions, a new version
number and version-naming strategy was adopted to reflect the change.
The SLE sources come from SUSE’s soon-to-be-released SLE 12 Service
Pack 1 (SP1). The naming strategy is SLE 12 SP1, or 12.1 + 30 =
openSUSE Leap 42.1. Many have asked why 42, but SUSE and openSUSE have
a tradition of starting big ideas with a four and a two, a reference to
The Hitchhiker’s Guide to the Galaxy.

With every minor version of openSUSE Leap, users can expect a new KDE
and GNOME, but today is all about openSUSE Leap 42.1, so if you are
tired of a brown desktop, try a green one.

Thank you to everyone who helped make this big Leap a success!

Have a lot of fun, and get thinking about how we can make Leap 42.2
even better :)

Regards,

Richard Brown
openSUSE Board Chairman
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Shawn McMahon
2015-11-05 16:16:58 UTC
Permalink
They should start a community site for LEAP contributors and friends. They
could use a domain for it such as "leap-cf.org".
Post by Bryan J Smith
If you haven't heard, the OpenSuSE Project has released its own
"Enterprise Linux Rebuild" using SuSE Linux Enterprise sources --
OpenSuSE Leap -- essentially SuSE's own "CentOS."
Bryan J Smith
2015-11-05 16:25:10 UTC
Permalink
Post by Shawn McMahon
They should start a community site for LEAP contributors and friends. They
could use a domain for it such as "leap-cf.org".
Thought you guys would get a kick out of the name. ;)
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Aaron Morrison
2015-11-05 16:42:59 UTC
Permalink
The Domain name was the first thing I thought of when I read your post. 😄


--am
Post by Bryan J Smith
Post by Shawn McMahon
They should start a community site for LEAP contributors and friends. They
could use a domain for it such as "leap-cf.org".
Thought you'd guys get a kick out of the name. ;)
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Bryan J Smith
2015-11-05 16:48:49 UTC
Permalink
Post by Aaron Morrison
The Domain name was the first thing I thought of when I read your post. 😄
Indeed. ;)

In all seriousness, I thought it was cool that we're finally
seeing another Enterprise Linux rebuild. There are a lot of projects
built up around CentOS now, beyond just EPEL (although most leverage
EPEL). SuSE clearly wants to see something similar grow around SLES.

The name was just an added bonus.

-- bjs
Steve Litt
2015-11-05 17:01:50 UTC
Permalink
On Thu, 5 Nov 2015 11:48:49 -0500
Post by Bryan J Smith
In all and real seriousness, I thought it was cool we're finally
seeing another Enterprise Linux rebuild.
What's Enterprise Linux?

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Bryan J Smith
2015-11-05 17:45:57 UTC
Permalink
Post by Steve Litt
What's Enterprise Linux?
Enterprise Linux [distribution]:

A Linux distribution that sustains various libraries and software
versions for _years_ after they are _abandoned_ by the Upstream, by
backporting bug and security fixes to those older, abandoned versions.
Additionally, it can provide long-term Application Programming
Interfaces (APIs) and even Application Binary Interfaces (ABIs) that
are shielded from change.

Examples of Enterprise Linux:
- RHEL: 10+3 years, documented kABI and select library ABIs
- SLEx: 9+3 years, some ABI mitigation, but with rebasing to more
recent versions (including kernel rebasing)
- Ubuntu LTS: 5 years, no documented ABI

Non-examples of Enterprise Linux:
- Fedora: only a 2R+1M (typically 13-18 month) release cycle, regular rebasing
- Ubuntu non-LTS: only a 9-month release cycle, regular rebasing
- Etc.
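
To make the "sustained versions" point concrete, here is a minimal,
illustrative C sketch (assuming a glibc-based distro; the
gnu_get_libc_version() call is glibc-specific) that prints the C
library version a binary was built against versus the one it actually
runs on. On an Enterprise Linux those two stay put for the life of the
release; on a fast-moving distro they drift within a year or two:

    /* glibc_check.c -- illustrative sketch only; assumes a glibc-based system.
     * Build: gcc -o glibc_check glibc_check.c
     */
    #include <stdio.h>
    #include <features.h>           /* __GLIBC__, __GLIBC_MINOR__ (compile time) */
    #include <gnu/libc-version.h>   /* gnu_get_libc_version() (run time) */

    int main(void)
    {
        /* The glibc this binary was compiled against ... */
        printf("built against glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);

        /* ... versus the glibc it is actually running on. */
        printf("running on glibc    %s\n", gnu_get_libc_version());
        return 0;
    }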
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Bryan J Smith
2015-11-05 17:50:18 UTC
Permalink
Post by Bryan J Smith
- RHEL: 10+3 years, Documented kABI and select library ABIs
- SLEx: 9+3 years, Some ABI mitigation, but with more recent rebasing
(including kernel rebasing)
Correction, newer SLES releases are now 10+3 [1], and very much like RHEL. [2]

I.e.,
- Phase 1: years 1-5
- Phase 2: years 6-7 (no more RFEs)
- Phase 3: years 8-10 (no more SP/U)
- Extended: years 11-13 (added entitlement)

-- bjs

[1] https://www.suse.com/support/policy.html
[2] https://access.redhat.com/support/policy/updates/errata
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Steve Litt
2015-11-05 18:23:47 UTC
Permalink
On Thu, 5 Nov 2015 12:45:57 -0500
Post by Bryan J Smith
Post by Steve Litt
What's Enterprise Linux?
A Linux distribution that sustains various libraries and software
versions for _years_ after they are _abandoned_ by the Upstream by
backporting bug and security fixes to those older, abandoned versions.
Additionally, they can provide long-term Application Programmer
Interfaces (APIs) and even Application Binary Interfaces (ABIs) that
are mitigated against changes.
- RHEL: 10+3 years, Documented kABI and select library ABIs
- SLEx: 9+3 years, Some ABI mitigation, but with more recent rebasing
(including kernel rebasing)
- LTS: 5 years, No documented ABI
- Fedora: Only 2R+1M (typically 13-18 month) release cycle, regular rebasing
- non-LTS: Only 9 month release cycle, regular rebasing
- Etc...
Thanks!

I had always been under the impression that "Enterprise Linux" was just
a buzzword for big-iron Linux, and wondered why any particular distro
was necessary for big iron. Now that I understand what it *really* is,
it makes perfect sense. If I had a couple hundred computers to take care
of, no way would I want to be pressured to upgrade every few months,
especially if I were stuck with an app that worked with earlier Linuxes
but not later ones.

Thanks,

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Bryan J Smith
2015-11-05 18:27:26 UTC
Permalink
Post by Steve Litt
Thanks!
I had always been under the impression that "Enterprise Linux" was just
buzzword for big-iron Linux, and wondered why any particular distro was
necessary for big-iron.
That's what most Linux advocates believe. They think it's only for
the warm'n fuzzy business people.

But Red Hat, SuSE, etc... actually put a lot of people on the
Upstream, because they are supporting customers, including backporting
fixes to older versions that the Upstream has long abandoned.

I.e., $$$ for sustaining engineering = same engineers on Upstream too
Post by Steve Litt
Now that I understand what it *really* is, it makes perfect sense. If I had
a couple hundred computers to take care of, no way would I want to be
pressured to upgrade every few months, especially if I were stuck with
an app that worked with earlier Linuxes but not later ones.
Exactly. ;)

Now if I could only get developers to stop targeting Ubuntu (non-LTS),
I'd have it made. Unfortunately, I'm often "cleaning up" that work so
it'll run on Ubuntu LTS, if not RHEL.

-- bjs
Steve Litt
2015-11-05 18:57:28 UTC
Permalink
On Thu, 5 Nov 2015 13:27:26 -0500
Post by Bryan J Smith
Post by Steve Litt
Now that I understand what it *really* is, it makes perfect sense.
If I had a couple hundred computers to take care of, no way would I
want to be pressured to upgrade every few months, especially if I
were stuck with an app that worked with earlier Linuxes but not
later ones.
Exactly. ;)
Now if I could only get developers to stop targeting Ubuntu (non-LTS),
I'd have it made. Unfortunately, I'm often "cleaning up" that work so
it'll run on Ubuntu LTS, if not RHEL.
U mean in-house developers? Upstreams? The people distros call
"developers" that I call packagers?

Just speaking for myself, when I build something (and I will be coming
out with bookwriting software soon), I always try to make it work with
a lowest-common-denominator Linux and have it depend on as little as
possible, and I don't make it dependent on new versions of anything.
Unless speed is a real issue, I write it in Python to minimize the
possibility of errant pointers and buffer overruns in the code I
write.
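
(For illustration only, the kind of C mistake I'm dodging by staying in
Python is the classic unbounded copy; the names below are made up:)

    /* Illustrative sketch: the classic overrun versus the bounded version. */
    #include <stdio.h>
    #include <string.h>

    void risky(const char *user_input)
    {
        char buf[16];
        strcpy(buf, user_input);    /* overruns buf if the input is >= 16 bytes */
        printf("%s\n", buf);
    }

    void safer(const char *user_input)
    {
        char buf[16];
        snprintf(buf, sizeof(buf), "%s", user_input);  /* truncates, never overruns */
        printf("%s\n", buf);
    }

    int main(void)
    {
        /* Only the bounded version is called here. */
        safer("hello from a safely bounded buffer");
        return 0;
    }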

But I know my beliefs on writing programs aren't universal.

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Shawn McMahon
2015-11-05 19:16:45 UTC
Permalink
Post by Steve Litt
U mean in-house developers? Upstreams? The people distros call
"developers" that I call packagers?
In the Enterprise distro companies, they really are developers.

Propose a patch to GNU Coreutils and one of the first two people who will
respond to you will be from a redhat.com address. The Enterprise Linux
companies pay a lot of the core developers in Linux and GNU to work full
time on the code.
Bryan J Smith
2015-11-05 19:34:29 UTC
Permalink
Post by Steve Litt
U mean in-house developers? Upstreams? The people distros call
"developers" that I call packagers?
Obviously you don't work in the Upstream much. ;)

I always found it rather humorous when a new distro "maintainer"
("packager," as you call them) would come in and rant about Red Hat,
only to find out they had just insulted the very developers of the
software. They were completely ignorant of the fact, especially if
the Red Hat employee did _not_ use their @redhat.com address.

Red Hat commonly _recommends_ a lot of employees _not_ use their
@redhat.com address, for various, understandable reasons. Only those
"well established" are allowed. Even I did _not_ use my @redhat.com
address on EPEL and other lists, even though they are Red Hat
maintained, and I even had a Red Hat People page too (usually the
litmus test).

In fact, and I need to find it ... but most Red Hat developers use
their Fedora Accounts System (FAS) with an @fedoraproject.org, for
_most_ Upstream work. Again, it's only "well established" people that
use their @redhat.com.

<Cue the study from GDK (of SuSE) about how little Ubuntu does to
develop the desktop, and how much Red Hat and Novell very much did in
the Upstream.>

-- bjs
Steve Litt
2015-11-06 19:07:43 UTC
Permalink
On Thu, 5 Nov 2015 14:34:29 -0500
Post by Bryan J Smith
Post by Steve Litt
U mean in-house developers? Upstreams? The people distros call
"developers" that I call packagers?
Obviously you don't work in the Upstream much. ;)
Well, except for VimOutliner, which is a package in most major distros,
you're right.

But I still want to know. Which kind of programmers would you like to
persuade to stop targeting Ubuntu (non-LTS)?

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Bryan J Smith
2015-11-05 19:43:51 UTC
Permalink
Regarding ...
Post by Steve Litt
Just speaking for myself, when I build something (and I will be coming
out with bookwriting software soon), I always try to make it work with
a lowest common denominator Linux and have it depend on as little as
possible, and I don't make it dependent on new versions of anything.
Unless speed is a real issue, I write it in Python to minimize the
possibility of errant pointers and buffer overruns (and overrun
possibilities) in the code I write.
Umm ... Python, Java, and other p-type software is different from
system-level, or even back-end desktop, software. That was the promise
of "write once, run everywhere." But even things like Python and Java
_do_ run into versioning differences, let alone library ones.

When you're doing C, and trying to maintain API -- let alone ABI
(binary) -- compatibility for 10+ years -- in a single distro release
-- it's a whole different ballgame.
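
A tiny, hypothetical illustration of why the ABI half is the harder
promise (the names are made up): the function prototype -- the API --
below never changes, but adding one field changes the struct's size
and layout, so every already-compiled caller that allocates the struct
itself is broken until it is rebuilt.

    /* widget.h -- hypothetical library header; names are made up.
     * Version 1 of the ABI ships this layout:
     */
    struct widget {
        int id;
        int flags;
    #ifdef WIDGET_V2
        int owner;   /* added in "v2": the API is unchanged, the ABI is not */
    #endif
    };

    /* The API never changes across versions: */
    int widget_init(struct widget *w);

    /* A caller compiled against v1 does:
     *
     *     struct widget w;     -- sized at the *caller's* compile time
     *     widget_init(&w);     -- a v2 library writes w.owner past that size
     *
     * Recompiling fixes it (API-compatible); the old binary silently is not
     * (an ABI break).  Keeping layouts, symbol versions, and sonames frozen
     * for 10+ years is exactly the enterprise-distro promise above.
     */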
Post by Steve Litt
But I know my beliefs on writing programs isn't universal.
When you're on the order of Drepper, and have to put up with the
constant, condescending BS he did ... for _decades_ ... then you'll
know what it really takes to maintain an API at a whole new level.

Writing to a p-type, portable language with its own byte code run-time
is not the same.

-- bjs
Bryan J Smith
2015-11-05 23:40:16 UTC
Permalink
Post by Steve Litt
U mean in-house developers?
Just to backtrack, as I went off on a couple of tangents ...

Yes, in-house developers at clients. They develop on Ubuntu (non-LTS)
and then say, "Oh, just call Canonical for support." And when I
explain to them that Canonical doesn't offer support on non-LTS, they
don't believe me. I usually have to enlist one of my colleagues from
Canonical at that point. Yes, Canonical *will* sell them
"professional services." But even they will tell them to develop for
LTS, or at least for the next release of LTS.

That's probably my #1 complaint right now, and sadly, even non-tech
people "get" it better than a lot of Ubuntu fanboys. They *do* see the
difference between Ubuntu and Canonical Advantage / Ubuntu LTS, much as
with Fedora versus RHEL / CentOS. They can see past the trademarks to
the life cycle and support, ironically enough.

In the end, I'm often helping them port to RHEL for a reason: they
didn't target Ubuntu LTS in the first place, but an off-year Ubuntu.
So if we're going to port, they are going to just port to RHEL, where
they already have an SLA. Canonical runs into the same issues with
Red Hat that Red Hat often runs into with Microsoft or Solaris. The
ELA/SLA is already signed, so there's no sense in bringing in yet
another.

At most they try Ubuntu LTS to evaluate, but Canonical doesn't see a
dime, because they already have an SLA with Red Hat. Not fair, but
that's just how it is. But for them to develop for non-LTS, that's on
them. I cannot help them with their ignorance, especially when they
are so difficult. As I said, to *help* Canonical, I often bring in my
good colleagues *from* Canonical to "set them straight."

Just like I will bring in my colleagues from CentOS when a client's
tech or -- more often yet -- contractor mouths something off about
CentOS v. RHEL. Nothing like a CentOS developer to set them straight
on "SLAs." ;)

Which brings me back to ...
[ the point Shawn made, and I piggybacked on]
Post by Steve Litt
Upstreams? The people distros call "developers" that I call packagers?
Probably Canonical's biggest issue is rabid Ubuntu fans. They will
not only shove non-LTS into areas where it doesn't belong, but Creator
help them if Red Hat is on-site and solves an Ubuntu issue. It doesn't
work out very well, especially when their only argument is, "but it's
free."

I.e., it doesn't take a tech to understand that if the guy who
_wrote_ the software -- as in the _developer_ -- doesn't work for
Canonical, but for the company you're already paying money to, you're
likely going to look at Canonical as "less value."

It's the #1 reason why I've seen Ubuntu _banned_ in companies. And
the sad thing is, it's not remotely Canonical's fault. I really
*hate* to see it happen too. But I have, more than once.
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Steve Litt
2015-11-06 19:11:20 UTC
Permalink
On Thu, 5 Nov 2015 18:40:16 -0500
Post by Bryan J Smith
Post by Steve Litt
U mean in-house developers?
Just to backtrack, as I went off on a couple of tangents ...
Yes, in-house developers at clients. They develop on Ubuntu (non-LTS)
and then say, "Oh, just call Canonical for support."
Thanks. Please ignore my second asking of that question, as this
answers it.

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Steve Litt
2015-11-06 23:21:14 UTC
Permalink
On Thu, 5 Nov 2015 18:40:16 -0500
Post by Bryan J Smith
Post by Steve Litt
U mean in-house developers?
Just to backtrack, as I went off on a couple of tangents ...
Yes, in-house developers at clients. They develop on Ubuntu (non-LTS)
If my company is running Ubuntu 14.10 or 15.04, I write the software
I create to run on those OS versions. I see your point about making a
14.04 LTS VM, testing it there, and, if there are significant
differences, writing the software so it works there.

I'm not quite sure why anybody would use system dependencies if they're
not creating system software. Why would one do that? To gain a 5% speed
increase? Just make your algorithm 5% better.
Post by Bryan J Smith
and then say, "Oh, just call Canonical for support."
:-) I guess I came from a PC contract programming background, not
Mainframe or Mini. None of the PC programmers I knew would have *ever*
depended on vendor support for the OS --- just for the language/dev
environment.

Obviously, if I were writing a driver I might need help from the OS
vendor.

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Bryan J Smith
2015-11-06 23:27:44 UTC
Permalink
Post by Steve Litt
If my company is running Ubuntu 14.10 or 15.04, I write the software
I create to run on those OS versions.
So what happens by July 2015 or January 2016, respectively?
How do you sustain the software?

You modify it, correct?
And you don't do that for free ... do you?
Post by Steve Litt
I see your point about making a 14.04LTS VM, testing it there,
and if there are significant differences, write the software so it
works there.
Writing software for LTS 14.04, or looking at current 16.04
developments so you're ready for that next release.
Post by Steve Litt
I'm not quite sure why anybody would use system dependencies if they're
not creating system software. Why would one do that? To gain a 5% speed
increase? Just make your algorithm 5% better.
A lot of the world still relies on system libraries, especially the
core C libraries and other things. Not everyone runs p-type
interpreted or byte-code JIT-compiled software.

And even when they do, there are still web and other integration issues.

Are you going to upgrade non-LTS every 6 months? Maybe, if you like
getting paid by the hour. But the clients won't ... they definitely
won't ... definitely at the higher levels.
Post by Steve Litt
:-) I guess I came from a PC contract programming background, not
Mainframe or Mini. None of the PC programmers I knew would have *ever*
depended on vendor support for the OS --- just for the language/dev
environment.
And yet ... the language/dev environment is still tied to the system. ;)
Post by Steve Litt
Obviously, if I were writing a driver I might need help from the OS
vendor.
Apparently you don't worry about libC, LibStdC++, etc..., and don't
need various libraries.

I mean ... why don't you just install an .so (like a DLL) in the
program directory, right? Can you think of a reason you might not
want to do that? Not just for your customer/client ... but for your
own workload?
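
To be concrete, the pattern I mean is the hypothetical sketch below:
the application carries and loads its own copy of a library instead of
the distro-maintained one, which means every future CVE in that copy is
now the application vendor's problem (libfoo and foo_version() are
made-up names).

    /* app.c -- illustrative sketch of the "ship your own .so next to the
     * binary" pattern.  libfoo and foo_version() are made-up names.
     * Build: gcc -o app app.c -ldl
     */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* Load the bundled copy from the program directory, bypassing the
         * system-maintained (and security-patched) copy of the same library. */
        void *handle = dlopen("./libfoo.so", RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        int (*foo_version)(void) = (int (*)(void))dlsym(handle, "foo_version");
        if (foo_version != NULL)
            printf("bundled libfoo version: %d\n", foo_version());

        dlclose(handle);
        return 0;
    }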

Welcome to my world. ;)

-- bjs
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Steve Litt
2015-11-07 01:08:04 UTC
Permalink
On Fri, 6 Nov 2015 18:27:44 -0500
Post by Bryan J Smith
Post by Steve Litt
If my company is running Ubuntu 14.10 or 15.04, I write the software
I create to run on those OS versions.
So what happens by July 2015 or January 2016, respectively?
How do you sustain the software?
You modify it, correct?
Probably not. I'm very careful about what dependencies I introduce, so
most of the time it works on any darn distro.

BUT...

To the extent that I would put in libraries, yes, you're right, I'd be
better off developing the software on the LTS so that I'm not
tempted to use brand-new features that wouldn't be portable. I got
furious at LyX for doing this once, so I wouldn't want to do it. Thanks
for pointing this out. I'd never given it much thought before.
Post by Bryan J Smith
And you don't do that for free ... do you?
Absolutely I'll do it for free. Just as soon as my landlord stops
charging me rent, the doctor doesn't bill me for his services, the car
dealership gives me a car whenever the old one dies, and FSU and
Seminole State refund me all the money I spent on my kids' educations.
Post by Bryan J Smith
Post by Steve Litt
I see your point about making a 14.04LTS VM, testing it there,
and if there are significant differences, write the software so it
works there.
Writing software for LTS 14.04, or looking at current 16.04
developments so you're ready for that next release.
Post by Steve Litt
I'm not quite sure why anybody would use system dependencies if
they're not creating system software. Why would one do that? To
gain a 5% speed increase? Just make your algorithm 5% better.
A lot of the world still relies on system libraries, especially those
core C libraries and other things. Not everyone runs p-type
interpreted or byte code JIT compiled.
How complex do you think my programs are? :-)

printf(), malloc(), free(), and the getopt_long functions have worked
identically on every Linux computer I've used. If that weren't the
case, I wouldn't be using C. I'll tell you something else: I don't use
a library unless what the library provides is mostly what I need --
unless a Venn diagram between the library's provisions and my needs
shows a lot more intersection than XOR. I don't deliberately reinvent
the wheel, but I'll reinvent the spoke every time in preference to
modifying somebody else's wheel in order to obtain a spoke.
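
For illustration, a minimal sketch of that style -- a made-up program
that needs nothing beyond <stdio.h>, <stdlib.h>, and <getopt.h>:

    /* hello.c -- illustrative sketch of the "basic headers only" style.
     * Build: gcc -o hello hello.c
     * Run:   ./hello --name World --count 2
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <getopt.h>

    int main(int argc, char *argv[])
    {
        const char *name = "world";
        int count = 1, opt, i;

        static const struct option longopts[] = {
            { "name",  required_argument, NULL, 'n' },
            { "count", required_argument, NULL, 'c' },
            { NULL, 0, NULL, 0 }
        };

        while ((opt = getopt_long(argc, argv, "n:c:", longopts, NULL)) != -1) {
            switch (opt) {
            case 'n': name = optarg;        break;
            case 'c': count = atoi(optarg); break;
            default:
                fprintf(stderr, "usage: %s [--name NAME] [--count N]\n", argv[0]);
                return 1;
            }
        }

        for (i = 0; i < count; i++)
            printf("hello, %s\n", name);
        return 0;
    }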
Post by Bryan J Smith
Apparently you don't worry about libC, LibStdC++, etc..., and don't
need various libraries.
Correct. I'm not writing TCP stacks or device drivers.

I do see one area of concern: I sometimes grab info from somebody
else's command (perhaps the ip command) from a C or Python program or a
shell script. If that command's input or output changes between OS
versions, my stuff breaks. I haven't seen that happen too often.
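
One way to sidestep that particular breakage, at least for addresses,
is to skip the external command and ask the C library instead;
getifaddrs() has been a stable interface on glibc and the BSDs for a
long time. A rough, illustrative sketch:

    /* addrs.c -- illustrative sketch: list IPv4 addresses via getifaddrs()
     * instead of parsing "ip addr" output, whose text format can change.
     * Build: gcc -o addrs addrs.c
     */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <ifaddrs.h>

    int main(void)
    {
        struct ifaddrs *ifap, *ifa;
        char buf[INET_ADDRSTRLEN];

        if (getifaddrs(&ifap) == -1) {
            perror("getifaddrs");
            return 1;
        }
        for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
            if (ifa->ifa_addr && ifa->ifa_addr->sa_family == AF_INET) {
                struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
                inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
                printf("%-10s %s\n", ifa->ifa_name, buf);
            }
        }
        freeifaddrs(ifap);
        return 0;
    }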

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Bryan J Smith
2015-11-07 03:17:27 UTC
Permalink
Post by Steve Litt
Absolutely I'll do it for free. Just as soon as my landlord stops
charging me rent, the doctor doesn't bill me for his services, the car
dealership gives me a car whenever the old one dies, and FSU and
Seminole State refund me all the money I spent on my kids' educations.
Hence why ... corporations run Enterprise Linux distros. ;)
Mitigated API, if not mitigated ABI, changes mean no (or fewer) updates.
Post by Steve Litt
How complex do you think my programs are? :-)
If they use a Python system call that requires a C platform library
... it doesn't matter. Things get deprecated Upstream, and sometimes
that means what worked here, doesn't work there.

This is my biggest problem with most developers. They don't see the
sustainment issues. I've been on both sides. :(
Post by Steve Litt
printf(), malloc(), free(), and the getopt_long functions have worked
identically on every Linux computer I've used. If that weren't the
case, I wouldn't be using C. I'll tell you something else. I don't use
a library unless the what the library provides is mostly what I need.
Unless a Venn diagram between the library's provisions and my needs
show a lot more intersection than xor. I don't deliberately reinvent
the wheel, but I'll reinvent the spoke every time in preference to
modifying somebody else's wheel in order to obtain a spoke.
Correct. I'm not writing TCP stacks or device drivers.
I do see one area of concern: I sometimes grab info from somebody
else's command (perhaps an ip command) from a C or Python program
or a shellscript. If somebody else's command's input or output changes
between OS versions, my stuff breaks. I haven't seen that happen too
often.
API. ;)

Libraries change.

-- bjs
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Shawn McMahon
2015-11-07 04:15:53 UTC
Permalink
Post by Bryan J Smith
If they use a Python system call that requires a C platform library
... it doesn't matter. Things get deprecated Upstream, and sometimes
that means what worked here, doesn't work there.
I don't recall the details, because it's been years, but once upon a time a
Korn Shell bug was fixed and broke dozens of our scripts that relied on the
behavior. It cost us millions by the time it was done.

Now multiply that by a hundred to see what it would be like if we didn't
use an Enterprise distro.
Barry Fishman
2015-11-07 05:00:58 UTC
Permalink
Post by Bryan J Smith
This is my biggest problem with most developers. They don't see the
sustainment issues. I've been on both sides. :(
Post by Steve Litt
printf(), malloc(), free(), and the getopt_long functions have worked
identically on every Linux computer I've used. If that weren't the
case, I wouldn't be using C. I'll tell you something else. I don't use
a library unless the what the library provides is mostly what I need.
Unless a Venn diagram between the library's provisions and my needs
show a lot more intersection than xor. I don't deliberately reinvent
the wheel, but I'll reinvent the spoke every time in preference to
modifying somebody else's wheel in order to obtain a spoke.
Correct. I'm not writing TCP stacks or device drivers.
I do see one area of concern: I sometimes grab info from somebody
else's command (perhaps an ip command) from a C or Python program
or a shellscript. If somebody else's command's input or output changes
between OS versions, my stuff breaks. I haven't seen that happen too
often.
API. ;)
Libraries change.
As an experiment I tried building some C software I wrote and haven't
changed since the mid-1980s. The compiler printed out lots of
warnings, primarily due to changes to the C language spec and default
header files, but the executables seemed to work just fine.

But I always wrote code based on what the language and Unix
standards said rather than what the local system's OS allowed.

On the other hand, much of the BSD utility code was hard to fix when
the POSIX standards became dominant, because developers at UC Berkeley
were so enamored with their own convoluted APIs that they used them
even when they were not needed, and actually made their code less
readable and maintainable. I know I ended up dropping useful packages
like SPMS (written at HP Palo Alto) because the code was more of an
effort to port to Solaris or Linux than I was willing to undertake.

--
Barry Fishman
Bryan J Smith
2015-11-07 05:11:03 UTC
Permalink
Post by Bryan J Smith
API. ;)
Libraries change.
As an experiment I tried building some C software I wrote and haven't
changed since the mid 1980's. The compiler printed out lots of
warnings, primarily do to changes to the C language spec and default
header files, but the executables seemed to work just fine.
But I always wrote code based on the what the language and Unix
standards said rather that what the local system's OS allowed.
Do I hear an echo?

What kind of libraries are you guys using? Or are you really not
using any headers other than some basic stuff?

I mean ... even they go ... "See! See! I can build the software!"

And then I go ... "Oh look, what are all those unresolved symbols when
you link?"

Some of us work in the "real world" where we deal with GUIs,
Middleware, SQL Services, etc...

But you guys keep making these arguments.

-- bjs
Steve Litt
2015-11-07 05:43:23 UTC
Permalink
On Sat, 7 Nov 2015 00:11:03 -0500
Post by Bryan J Smith
Post by Bryan J Smith
API. ;)
Libraries change.
As an experiment I tried building some C software I wrote and
haven't changed since the mid 1980's. The compiler printed out
lots of warnings, primarily do to changes to the C language spec
and default header files, but the executables seemed to work just
fine. But I always wrote code based on the what the language and
Unix standards said rather that what the local system's OS allowed.
Do I hear an echo?
What kind of libraries are you guys using? Or are you really not
using any headers than some basic stuff?
That's correct. 99% of the time, I use only basic headers. I'm an
application programmer, not a systems programmer. I don't write device
drivers, and I try very hard to write GUI programs only when necessary,
and those times I try to have the GUI do the very least work it can do.
Post by Bryan J Smith
I mean ... even they go ... "See! See! I can built the software!"
Yes, because he coded to the language and the Unix standards.
Post by Bryan J Smith
And then I go ... "Oh look, what are all those unresolved symbols when
you link?"
Well, in all fairness, his code *was* from the mid-1980's. I mean,
prototypes were new back then.
Post by Bryan J Smith
Some of us work in the "real world" where we deal with GUIs,
Middleware, SQL Services, etc...
But you guys keep making these arguments.
And the one thing we have in common is that when we write code, we're
careful about dependencies, and we write to standards.

Listen, nobody's arguing that it's not an excellent idea to code to an
LTS. It's an excellent idea. I'm going to do it from now on. In these
days of VMs it's trivial to do so.

And I agree that if I had a whole bunch of servers to maintain, I'd use
an LTS version. Nobody needs the aggravation of having stuff bust every
6 months.

All I'm saying is there are ways to code your applications so they'll
probably work on every modern version of every major distro, and on
future distros for a long time to come. There are a lot of other
benefits of coding that way, but, well, it's almost 1am.

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Bryan J Smith
2015-11-07 06:01:01 UTC
Permalink
Post by Steve Litt
That's correct. 99% of the time, I use only basic headers. I'm an
application programmer, not a systems programmer. I don't write device
drivers, and I try very hard to write GUI programs only when necessary,
and those times I try to have the GUI do the very least work it can do.
Why do you think only device drivers use system libraries?

Or are you shipping unmaintained libraries, whether .so files in the
application directory, or statically linked lib.a files in the binary,
so they are yet another attack vector? That's the #1 problem with
Windows right now, especially ISVs that ship .dlls in their
application directory, or statically linked libraries, including open
source.

Enterprise distros maintain _all_ those libraries, an extensive set
used by _applications_, so they work for 10+ years.
Post by Steve Litt
Yes, because he coded to the language and the Unix standards.
Oh boy.

I guess you're just another guy I have to show this to first hand.
Because you're saying this, then looking like a deer in headlights
when the software won't run -- or even build -- on a different release
version.
Post by Steve Litt
And the one thing we have in common is that when we write code, we're
careful about dependencies, and we write to standards.
Libraries have _nothing_ to do with standards.
Post by Steve Litt
Listen, nobody's arguing that it's not an excellent idea to code to an
LTS. It's an excellent idea. I'm going to do it from now on. In these
days of VM's it's trivial to do so.
But ... the old, out-of-date releases have security exploits. Running
them in a VM doesn't solve the problem. ;)
Post by Steve Litt
And I agree that if I had a whole bunch of servers to maintain, I'd use
an LTS version. Nobody needs the aggravation of having stuff bust every
6 months.
All I'm saying is there are ways to code your applications so they'll
probably work on every modern version of every major distro, and future
distros for a long future to come. There are a lot of other benefits of
coding that way, but, well, it's almost 1am.
I guess I'll just have to drop this. You're just another guy who
says this same stuff. Then he tries to run it on a different Ubuntu
release and goes, "Oh, I wrote this for 15.04, but you need to pay me
to modify it for LTS 14.04. But why don't you just run 15.04
instead?"

<facepalm>

-- bjs
Steve Litt
2015-11-07 13:54:36 UTC
Permalink
On Sat, 7 Nov 2015 01:01:01 -0500
Post by Bryan J Smith
Post by Steve Litt
That's correct. 99% of the time, I use only basic headers. I'm an
application programmer, not a systems programmer. I don't write
device drivers, and I try very hard to write GUI programs only when
necessary, and those times I try to have the GUI do the very least
work it can do.
Why do you think only device drivers use system libraries?
Or are you shipping unmaintained libraries, whether .so files in the
application directory, or statically linked lib.a files in the binary,
so they are yet another attack vector?
Oh, I see the issue now. No way on earth would I ever attempt to
deliver any compiled code. I'd give them the source code and a make
file, let them compile it. Or, if I'm their employee or contractor,
I'd do that compilation. If it needs ./configure or the equivalent, I
feel like I've depended on way too much.

Now of course, if they have 500 machines, I'm not going to compile 500
times. Hopefully they have one or two distro versions: I'd compile
against those and copy my executable. Not libraries, my executable.
Like I said, I'm an applications programmer.

[snip]
Post by Bryan J Smith
I guess I'll just have to drop this. You're just another guy that
says this say stuff. Then he tries to run it on a different Ubuntu
release and goes, "Oh, I wrote this for 15.04, but you need to pay me
to modify it for LTS 14.04. But why don't you just run 15.04
instead?"
I haven't had this problem. Most of my C-created executables run OS
version after OS version, distro after distro. I have one executable
that internally watermarks eBooks and needs to get recompiled every
time the computer changes. So I compile it: 20 minutes to remember what
program it is, 5 seconds to recompile, 1 minute to test.

Now I'm on a rolling release, so let's see how *that* influences this
whole topic :-).

But yeah, serious business, before releasing a stable version of any
free software I write, I'll make sure it works on Ubuntu 14.04 as well
as Void, and I thank you for that idea.
Post by Bryan J Smith
<facepalm>
I've heard palm oil is very good for your complexion. :-)

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Shawn McMahon
2015-11-07 15:53:04 UTC
Permalink
Post by Steve Litt
On Sat, 7 Nov 2015 01:01:01 -0500
Post by Bryan J Smith
Post by Steve Litt
That's correct. 99% of the time, I use only basic headers. I'm an
application programmer, not a systems programmer. I don't write
device drivers, and I try very hard to write GUI programs only when
necessary, and those times I try to have the GUI do the very least
work it can do.
Why do you think only device drivers use system libraries?
Or are you shipping unmaintained libraries, whether .so files in the
application directory, or statically linked lib.a files in the binary,
so they are yet another attack vector?
Oh, I see the issue now. No way on earth would I ever attempt to
deliver any compiled code. I'd give them the source code and a make
file, let them compile it. Or, if I'm their employee or contractor,
I'd do that compilation. If it needs ./configure or the equivalent, I
feel like I've depended on way too much.
Are you saying that you never, ever, use ANY system libraries? That 100% of
the time you reinvent your own wheels? You mentioned sometimes you write
GUI programs; I find it very hard to believe you're not linking in any
shared objects in those.
Steve Litt
2015-11-07 18:14:10 UTC
Permalink
On Sat, 7 Nov 2015 10:53:04 -0500
Post by Shawn McMahon
Post by Steve Litt
On Sat, 7 Nov 2015 01:01:01 -0500
Post by Bryan J Smith
Post by Steve Litt
That's correct. 99% of the time, I use only basic headers. I'm
an application programmer, not a systems programmer. I don't
write device drivers, and I try very hard to write GUI programs
only when necessary, and those times I try to have the GUI do
the very least work it can do.
Why do you think only device drivers use system libraries?
Or are you shipping unmaintained libraries, whether .so files in
the application directory, or statically linked lib.a files in
the binary, so they are yet another attack vector?
Oh, I see the issue now. No way on earth would I ever attempt to
deliver any compiled code. I'd give them the source code and a make
file, let them compile it. Or, if I'm their employee or contractor,
I'd do that compilation. If it needs ./configure or the equivalent,
I feel like I've depended on way too much.
Are you saying that you never, ever, use ANY system libraries? That
100% of the time you reinvent your own wheels?
That's not what the preceding paragraph says. It just says I'm not
*shipping* (or delivering) any libraries, just executable code.
Post by Shawn McMahon
You mentioned
sometimes you write GUI programs; I find it very hard to believe
you're not linking in any shared objects in those.
Of course I use libraries to do GUI. But I don't create GUI libraries
that other people have to use, which was the point of the paragraph I
was responding to.

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Bryan J Smith
2015-11-07 21:02:26 UTC
Permalink
The most I will concede is that sockets and pipes help. But even most
calls these days are regularly changing RPCs, or REST. And then there's
message passing, which goes way back to even the original COM v. CORBA
debate.

Ironically, everyone who said we didn't need something like CORBA and
its security has failed to admit we're now struggling with this,
because we have a COM-like model. And that's the main argument some
are making now.

Sigh ... good designs are often passed up because people argue they are
not needed.

-- bjs

DISCLAIMER: Sent from phone, please excuse any typos
--
Bryan J Smith - Technology Mercenary
Barry Fishman
2015-11-07 17:31:42 UTC
Permalink
Post by Bryan J Smith
Post by Bryan J Smith
API. ;)
Libraries change.
As an experiment I tried building some C software I wrote and haven't
changed since the mid 1980's. The compiler printed out lots of
warnings, primarily do to changes to the C language spec and default
header files, but the executables seemed to work just fine.
But I always wrote code based on the what the language and Unix
standards said rather that what the local system's OS allowed.
Do I hear an echo?
What kind of libraries are you guys using? Or are you really not
using any headers than some basic stuff?
I mean ... even they go ... "See! See! I can built the software!"
And then I go ... "Oh look, what are all those unresolved symbols when
you link?"
The warnings are not unresolved symbols (which would cause the build
to fail), but reliance on C's default typing, which gcc warns about;
those declarations are now made explicit in header files that didn't
exist when the programs were written. I suspected that the now-obsolete
<strings.h> might have existed at the time on BSD systems, but not on
all the platforms on which I ran it, including VMS and about 7 flavors
of Unix, and it wasn't part of standard C.

I would have fixed the code if I still used it, but instead I
re-implement it for practice when learning a new computer language. I
currently have newer versions of it in Python and a variety of Common
Lisp and Scheme implementations. I currently use a Haskell version, but
my dynamically linked C code, dated October 2000, still sits in my ~/bin
directory and runs just fine with the latest-generation Debian Jessie,
Ubuntu Wily, Fedora 23, and Arch distributions, though it requires the
32-bit multi-arch support libraries on my 64-bit machines. The "file"
command replies [reformatted]:

bcal: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux
2.0.0, stripped

GNU/Linux is very upward compatible.

Yes, they just use basic stuff, but basic-level C is still very
powerful, and this is Unix, where most complex APIs can be separated
out into independent programs that you talk to via pipes or even
sockets. Often, I keep around alternate test versions of functions that
just pipe to "wget" or "ncat" when I have code that uses lower-level
interfaces. It's common that the wget versions run just as fast as
those that use a third-party library, or even as fast as when I work my
way through HTTP protocols and redirects myself.
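
For the curious, that pattern is nothing more exotic than popen() and a
read loop; a rough sketch (the URL is made up, and it assumes wget is
on the PATH):

    /* fetch.c -- illustrative sketch: pipe to wget rather than linking an
     * HTTP library.  Assumes wget is installed; the URL is hypothetical.
     * Build: gcc -o fetch fetch.c
     */
    #include <stdio.h>

    int fetch(const char *url)
    {
        char cmd[512], line[4096];
        FILE *p;

        snprintf(cmd, sizeof(cmd), "wget -q -O - '%s'", url);
        p = popen(cmd, "r");            /* talk to wget over a pipe */
        if (p == NULL)
            return -1;

        while (fgets(line, sizeof(line), p) != NULL)
            fputs(line, stdout);        /* or hand each line to the caller */

        return pclose(p);               /* wget's exit status */
    }

    int main(void)
    {
        return fetch("http://example.com/") == 0 ? 0 : 1;
    }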

<rant>

I spent much of my career working on designing and building large-scale
systems for the government (so I know what LTS means). In those days,
before Bill Gates, even third-party software vendors were expected to
supply source code even for proprietary software, so they had more
concern about its quality. [That's why we have copyrights and patents:
to protect disclosed information. There is no public interest in
protecting undisclosed information.] At least on the projects I worked
on, we were also expected to run within our budget, and did.

I learned to follow what the standards documents said (and to hold our
subcontractors to that), to only use platform-specific APIs where they
were required, and to isolate that code in one place rather than
distributing it all over the place with #ifdefs.

It may be heresy to say that the current "Agile" programming approach is
junk, and leads to fragile code which is a maintenance nightmare.

Of course, most users are becoming accepting of an environment where
code is released by major vendors still broken, and then patched after
distribution to a large number of users, when it is too late to fix the
real design issues. (After all, they were raised on Microsoft and Apple
code.) Software (and web sites) may look very pretty but be difficult
to use, and be filled with bugs that criminals (and other businesses
and governments) can use to exploit systems. [Why do we still have
buffer overflow and SQL injection bugs, when the solutions were worked
out long ago?]

Even the current rediscovery of responsive/adaptive design seems to
rely on predefined screen sizes, which never seem to include my
monitor.

</rant>
--
Barry Fishman
Bryan J Smith
2015-11-07 17:45:37 UTC
Permalink
Post by Barry Fishman
The warnings are not unresolved symbols (which would cause the build
to fail), but reliance on C's default typing, which gcc warns about;
those declarations are now made explicit in header files that didn't
exist when the programs were written.
Of course not, because you're "compiling" and not "linking." Anyone
can create object code. Object code almost always builds, unless
there is a portability issue.

The _problem_ -- and this is to the original point of Steve _asking_
(I did _not_ start this) "Enterprise Linux?" -- is when you actually
go to _link_, and all those platform dependencies, everything from the
loader to every call, hit ... and hit hard.

I wouldn't be bringing this up if I didn't live it month in, month out
... and definitely weekly at some customers. "We need you to figure
out what's wrong with RHEL and why this software doesn't work." Then
I discover it doesn't run on Ubuntu LTS either. ;)
Post by Barry Fishman
I suspected that the now obsolete <strings.h>
might have existed at the time on BSD systems, but not on all the
platforms on which I ran it, including VMS and about 7 flavors of Unix,
and it wasn't part of standard C.
I would have fixed the code if I still used it, but instead re-implement
it for practice when learning a new computer language. I currently have
newer versions of it in Python and a variety of Common Lisp and Scheme
implementations. I currently use a Haskell version, but my dynamically
linked C code, dated October 2000 still sits in my ~/bin directory and
runs just fine with the latest generation Debian Jessie, Ubuntu Wily,
Fedora 23, and Arch distributions, though requires the 32 bit multi-arch
support libraries on my 64 bit machines. The "file" command replies
bcal: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux
2.0.0, stripped
GNU/Linux is very upward compatible.
And what does ldd give you?

The more linked libraries, the more issues with linking to system libraries.

And the more statically linked, or directory-included, libraries, the
more attack vectors for unmaintained libraries.

The latter is what gets most Windows systems into trouble. ;)
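
(Aside: you don't even need ldd; a process can ask the runtime loader
which shared objects it is actually carrying. A glibc-specific sketch,
since dl_iterate_phdr() is a GNU extension:)

    /* loaded.c -- illustrative sketch: print every shared object mapped into
     * this process at run time.  glibc-specific (dl_iterate_phdr is a GNU
     * extension).  Build: gcc -o loaded loaded.c
     */
    #define _GNU_SOURCE
    #include <link.h>
    #include <stdio.h>

    static int show(struct dl_phdr_info *info, size_t size, void *data)
    {
        (void)size;
        (void)data;
        /* An empty name is typically the main executable itself. */
        printf("%s\n", info->dlpi_name[0] ? info->dlpi_name : "(main program)");
        return 0;    /* keep iterating */
    }

    int main(void)
    {
        dl_iterate_phdr(show, NULL);
        return 0;
    }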
Post by Barry Fishman
Yes they just use basic stuff, but basic level C is still very powerful,
and this is Unix where most complex API's can be separated out into
independent programs that you talk to via pipes or even sockets.
Understood. And in this regard, I will _grant_ you that pipes and
sockets _do_ improve things. However, most coders don't do that, and
even then, some things can_not_.
Post by Barry Fishman
Often, I keep around alternate test versions of functions that just pipe to
"wget", or "ncat" when I have code that uses lower level interfaces.
Its common that the wget versions seem to run just as fast as those that
use a 3rd party library or even when I work my way through http
protocols and redirects myself.
<rant>
I spent much of my career working on designing and building large scale
systems for the government (so I know what LTS means). In those days,
before Bill Gates, even 3-d party software vendors were expected to
supply source code even for proprietary software, so had more concern
about its quality. [That's why we have copyrights and patents. To
protect disclosed information. There is no public interest in
protecting undisclosed information.] At least on the projects I worked
on, we were also expected to run within our budget, and did.
I leaned to follow what the standards documents said (and hold our
subcontractors to that), and only use platform specific APIs where they
were required, and would isolate the code in one place rather than
distributing it all over the place with #ifdefs.
And that's very good. I'm just saying it's not everything.
Especially not today with all sorts of bindings to libraries,
services, etc...
Post by Barry Fishman
It may be heresy to say that the current "Agile" programming approach is
junk, and leads to fragile code which is a maintenance nightmare.
No argument there.
Post by Barry Fishman
Of course most users are becoming accepting of an environment where code
is released by major vendors still broken, and patched after
distribution to a large number of users, when it is to late fix the real
design issues. (After all they were raised on Microsoft and Apple
code.) Where software (and web sites) may look very pretty but are
difficult to use, and are filled with bugs that criminals (and other
businesses and governments) can use to exploit systems. [Why do we
still have buffer overflow and SQL injection bugs, after the solutions
have been worked out long ago?]
Even the current rediscovery of responsive/adaptive design, seems to
rely on predefined screen sizes, which never seems to include my
monitor.
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Aaron Morrison
2015-11-05 17:24:25 UTC
Permalink
Agreed.

If I get time, I'll have to play with it. We're a Red Hat shop at work, but they are kinda playing games with licensing (we are an embedded partner, not a reseller). Might be nice to have an alternative.

--am
Post by Bryan J Smith
Post by Aaron Morrison
The Domain name was the first thing I thought of when I read your post. 😄
Indeed. ;)
In all and real seriousness, I thought it was cool we're finally
seeing another Enterprise Linux rebuild. There are a lot of projects
built up around CentOS now, beyond just EPEL (although most leverage
EPEL). SuSE clearly wants to see similar around SLES.
The name was just an added bonus.
-- bjs
Bryan J Smith
2015-11-05 17:46:55 UTC
Permalink
Red Hat Salespeople vary. They are the one group that has a high
turnover and are more "traditional."

But they keep the GPL developer salaries paid. The alternative is to
always operate in the red (*cough*SuSE*cough*Canonical*).

-- bjs
Post by Aaron Morrison
Agreed.
If I get time, I'll have to play with it. We're a red hat shop at work, but they are kinda playing games with licensing ( we are an embedded partner, not a reseller). Might be nice to have an alternative.
--am
--
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Aaron Morrison
2015-11-05 18:55:36 UTC
Permalink
We've been pretty fortunate to have the same rep for a while now. But even then he needs "reminding" once in a while.

I have no qualms with Red Hat's model. But options do make for good leverage 😏.

--am
Post by Bryan J Smith
Red Hat Salespeople vary. They are the one group that has a high
turnover and are more "traditional."
But they keep the GPL developer salaries paid. The alternative is to
always operate in the red (*cough*SuSE*cough*Canonical*).
-- bjs
Bryan J Smith
2015-11-05 19:26:29 UTC
Permalink
Post by Aaron Morrison
We've been pretty fortunate to have the same rep for a while now.
But even then he needs "reminding" once in a while.
Probably some sales manager who is on him about expanding revenue.

Understand that things changed in 2011-2012 to ensure _only_ sales can
actually identify new revenue, instead of relying on non-sales people
and those who work with the customer. It's not just lack of incentive,
but active prevention, which is a long (and self-defeating) story.
Hence why sales is often thumping on customers for revenue expansion:
they no longer have as many non-sales people involved with the customer
to help discover where that expansion is. I actually feel sorry for sales in this
regard, but it's what happens when new management prevents people from
helping. :(

I.e., not my bi--h and I've long given up on trying to change it, as
it did _no_ favors for my future (no good deed goes unpunished).
Post by Aaron Morrison
I have no qualms with Red Hat's model. But options do make for good leverage 😏.
Exploit it all you can. Maybe things will change. Until then, know
where it's coming from.

That said, Red Hat has one of the highest engineering/technologist
retention rates for good reason.

-- bjs
Rick Moen
2015-11-07 08:53:37 UTC
Permalink
Post by Bryan J Smith
Red Hat Salespeople vary.
^^^^

Perhaps from season to season?

I must confess, I'm truly charmed by the notion of a Red Hat salesdroid
_varying_, i.e., changing over time. As opposed to there being a
_differing_ between one salesdroid and another.

(Hey, you probably would call this pedanticism. I call it cheap
entertainment.)

I remember one Red Hat salesdroid who eventually arranged for my $FIRM
around 2006 to send several people including yr. humble correspondent to
spend an afternoon with Ulrich Drepper about our glibc problems -- with
no results at all. Yay, scarlet chapeau. Rah, rah. (I see he got
replaced in 2012.)
Bryan J Smith
2015-11-07 13:22:16 UTC
Permalink
I've seen the view from Drepper's shoes. People don't appreciate the tough
positions he's put in. By default he has to avoid breaking API/ABI, versus
various changes people rant and complain about.

He left Red Hat almost 5 years ago to go to Wall Street. The timing was not
coincidental, as Red Hat had a great "brain drain" to Wall Street and other
firms in 2010-2011. Some of it was very preventable, sadly enough.

Linus, Drepper, LP, et al. I can not only see their views, but defend their
stances. When 99 out of 100 emails you receive every day are rudimentary
naivety about the grand scheme of things, it tends to leave them desensitized.

DISCLAIMER: Sent from phone, please excuse any typos
--
Bryan J Smith - Technology Mercenary
Post by Rick Moen
Post by Bryan J Smith
Red Hat Salespeople vary.
^^^^
Perhaps from season to season?
I must confess, I'm truly charmed by the notion of a Red Hat salesdroid
_varying_, i.e., changing over time. As opposed to there being a
_differing_ between one salesdroid and another.
(Hey, you probably would call this pedanticism. I call it cheap
entertainment.)
I remember one Red Hat salesdroid who eventually arranged for my $FIRM
around 2006 to send several people including yr. humble correspondent to
spend an afternoon with Ulrich Drepper about our glibc problems -- with
no results at all. Yay, scarlet chapeau. Rah, rah. (I see he got
replaced in 2012.)
Rick Moen
2015-11-07 16:17:23 UTC
Permalink
Post by Bryan J Smith
I've seen the view from Drepper's shoes. People don't appreciate the tough
positions he's put in. By default he has to avoid breaking API/ABI, versus
various changes people rant and complain about.
Yes, I'm sure he had to deal with that a lot. Proprietary software
losers sing that song incessantly. I cannot recall what broken glibc
feature we tried to get fixed, but it wasn't the 'you promised
nothing would ever change' complaint you allude to.
Post by Bryan J Smith
He left Red Hat almost 5 years ago to go to Wall Street. The timing was not
coincidental, as Red Hat had a great "brain drain" to Wall Street and other
firms 2010-2011. Some was very preventable, sadly enough.
There was a lot of corporate badness inside Red Hat. I heard about that
from some of my fellow ex-VA Linux Systems people.

When I said Drepper got replaced, I didn't mean his leaving Red Hat but
rather his removal as glibc maintainer. My $FIRM weren't the only
people who found him difficult to work with.
Bryan J Smith
2015-11-07 17:07:43 UTC
Permalink
Post by Rick Moen
Yes, I'm sure he had to deal with that a lot. Proprietary software
losers sing that song incessantly. I cannot recall what broken glibc
feature we tried to get fixed, but it wasn't the 'you promised
nothing would ever change' complaint you allude to.
I'm just saying that even when people think it's not, it is. There
are always many factors in many decisions in the Upstream, even if
there _is_ a "sound" case why it "should" change.
Post by Rick Moen
There was a lot of corporate badness inside Red Hat. I heard about that
from some of my fellow ex-VA Linux Systems people.
I just mentioned Red Hat Services had a lot of "brain drain" in
2010-2011. I did my part to try to retain people. But, ultimately, I
had to notify higher-ups when I started getting headhunters who
directly asked me, "Is Red Hat laying off? I've got a lot of RHCAs
looking."

But Red Hat Engineering has a very, very high retention rate.
Unfortunately for Red Hat, sometimes the Services side affects other
things. Case in point: I know who in Services pulled a lot of really
top Engineering talent with them, over to Wall Street.
Post by Rick Moen
When I said Drepper got replaced, I didn't mean his leaving Red Hat but
rather his removal as glibc maintainer. My $FIRM weren't the only
people who found him difficult to work with.
A lot of people found him very difficult to work with, as well as
Linus, LP, etc... There's a long list.

Even among Red Fedoras, there is a lot of argument between
maintainers who "take it in the crotch," repeatedly, every day, but do
_not_ react (DJ Delorie comes to mind), and those who "tell others
off." But I know those who are "more vocal" get debated. In my view,
I let those who "take it in the crotch" and don't react speak. I
don't feel the need to say anything, other than thank them for their
contributions.

Understand I'm _not_ saying I'm a fan of how some people do things.
But I _do_ know where it comes from.

And then there are many companies trying to get most things GPL
replaced with BSD/MIT licenses, or constantly trying to feed in
non-GPL-compatible patches, which is an added annoyance. Having BSD
people say "Linux people are NIH," despite the _legal_realities_ of
various BSD/MIT-"compatible" proprietary patches (many of them having
"strings attached"), gets old.

I've seen that as well, way too often.

-- bjs
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Rick Moen
2015-11-07 17:27:12 UTC
Permalink
Post by Bryan J Smith
Post by Rick Moen
Yes, I'm sure he had to deal with that a lot. Proprietary software
losers sing that song incessantly. I cannot recall what broken glibc
feature we tried to get fixed, but it wasn't of the 'you promised
nothing would ever changed' complaint you allude to.
I'm just saying that even when people think it's not, it is.
This was not something that had changed and $FIRM failed to adapt.
It was something that was simply broken, period. (Sorry, cannot
remember much in the way of details.)
Post by Bryan J Smith
There are always many factors in many decisions in the Upstream, even
if there _is_ a "sound" case why it "should" change.
To be sure -- but this was not change.

(You spend a lot of time ignoring the specifics of what I say, acting
as if I said something entirely different, and then talking past me.
Not a complaint; I hope you enjoy yourself. I merely mention that it
fails to be a conversation.)
Post by Bryan J Smith
A lot of people found him very difficult to work with, as well as
Linus, LP, etc... There's a long list.
Even between Red Fedoras, there is a lot of argument between
maintainers who "take it in the crotch," repeatedly, every day, but do
_not_ react (DJ Delorie comes to mind), and those who "tell others
off." But I know those who are "more vocal" are debated. In my view,
I let those who "take it in the crotch" and don't react speak. I
don't feel the need to say anything, other than thank them for their
contributions.
The particular problem with Drepper was not being crusty (which he was),
but rather his doing a fundamentally lousy job of maintaining glibc. Thus
his 2011 replacement in that particular. Possibly burned out, dunno.
--
Cheers, "Transported to a surreal landscape, a young girl kills the first
Rick Moen woman she meets, and then teams up with three complete strangers
***@linuxmafia.com to kill again." -- Rick Polito's That TV Guy column,
McQ! (4x80) describing the movie _The Wizard of Oz_
Bryan J Smith
2015-11-07 17:40:02 UTC
Permalink
Post by Rick Moen
This was not something that had changed and $FIRM failed to adapt.
It was something that was simply broken, period. (Sorry, cannot
remember much in the way of details.)
To be sure -- but this was not change.
You do understand the concept that things _can_ be broken, but they
will _not_ be changed because it's an existing ABI/API consideration,
correct? I have this discussion _often_ when bugs are found, about why
bugfixes will _not_ be made to the software ... because the bugs have
become the nominal operation. ;)

Yes, I get lots of dumb looks when I explain this. And yet ... the
executive in the room understands it better than the developers who
say, "you must fix it!" No, we'll create a concurrent package that
will alter the operation, but we will _not_ change the _default_
package. Same happens in the Upstream too.
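
[On the keep-the-default-intact point above: one concrete mechanism in
the glibc/ELF world is GNU symbol versioning, where the fix ships
alongside the old behavior and only newly linked programs pick it up.
A minimal sketch with an invented libdemo/parse_flags; it also needs a
linker version script defining the LIBDEMO_1.0/LIBDEMO_2.0 nodes.]

    /* libdemo.c -- hypothetical shared library. Binaries already linked
     * against parse_flags@LIBDEMO_1.0 keep the old (even "buggy")
     * behavior; newly linked programs resolve to the corrected default.
     *
     * Build roughly as:
     *   gcc -shared -fPIC libdemo.c -Wl,--version-script=libdemo.map \
     *       -o libdemo.so.1
     *
     * libdemo.map:
     *   LIBDEMO_1.0 { global: parse_flags; local: *; };
     *   LIBDEMO_2.0 { global: parse_flags; } LIBDEMO_1.0;
     */

    /* Old behavior, preserved exactly as originally released. */
    int parse_flags_v1(const char *s) { return s[0] == '-'; }
    __asm__(".symver parse_flags_v1, parse_flags@LIBDEMO_1.0");

    /* Corrected behavior, made the new default version ("@@"). */
    int parse_flags_v2(const char *s) { return s != 0 && s[0] == '-'; }
    __asm__(".symver parse_flags_v2, parse_flags@@LIBDEMO_2.0");
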
Post by Rick Moen
(You spend a lot of time ignoring the specifics of what I say, acting
as if I said something entirely different, and then talking past me.
Not a complaint; I hope you enjoy yourself. I merely mention that it
fails to be a conversation.)
And you fail to step back and recognize there's more than 1-2
viewpoints on this. Ergo, bugs sometimes become the norm.

In the case of libc, that is very much the case! Drepper constantly
had to deal with people telling him he was an idiot and wrong, when he
was protecting the standard API as it stood, bugs and all. ;)
Post by Rick Moen
The particular problem with Drepper was not being crusty (which he was),
but rather his doing a fundamentally lousy job of maintaining glibc. Thus
his 2011 replacement in that particular. Possibly burned out, dunno.
But I _do_ know. I had several cases where I wondered why, and then
found out why, _exactly_. ;)

Things came up, internally, and Drepper had a very, very _strong_
reason for keeping the "bug" there. Sure enough, I'd hit the
embargoed (not by Red Hat, but by a 3rd party) Bugzilla, and it made
sense.

I learned a lot about how and why Red Hat maintainers do seemingly
"stupid things" that are allegedly "no brainers," including in the
Upstream, not just Enterprise Linux (although even more so there too).

Drepper was one of the best authorities on how hardware and threading
worked, and a lot of the issues with coherency. At least that was my
experience reading lots of his documentation and rationale for not
changing things.

-- bjs
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Rick Moen
2015-11-07 18:38:22 UTC
Permalink
Post by Bryan J Smith
You do understand the concept that things _can_ be broken, but they
will _not_ be changed because it's an existing ABI/API consideration,
correct?
Yes. But this didn't involve a change.

You seem to be talking past me again. At length. I hope you are having
fun.
Post by Bryan J Smith
And you fail to step back and recognize there's more than 1-2
viewpoints on this.
You fail to step back and recognise that I wasn't talking about what
you decided to change the subject to.

But go ahead. I'm reasonably sure you are having fun.
Bryan J Smith
2015-11-07 20:58:27 UTC
Permalink
So ... explain "the issue." Try me.

-- bjs

DISCLAIMER: Sent from phone, please excuse any typos
--
Bryan J Smith - Technology Mercenary
Post by Rick Moen
Post by Bryan J Smith
You do understand the concept that things _can_ be broken, but they
will _not_ be changed because it's an existing ABI/API consideration,
correct?
Yes. But this didn't involve a change.
You seem to be talking past me again. At length. I hope you are having
fun.
Post by Bryan J Smith
And you fail to step back and recognize there's more than 1-2
viewpoints on this.
You fail to step back and recognise that I wasn't talking about what
you decided to change the subject to.
But go ahead. I'm reasonably sure you are having fun.
Rick Moen
2015-11-07 21:37:41 UTC
Permalink
Post by Bryan J Smith
So ... explain "the issue." Try me.
My boss and co-workers of that time period would probably remember, but
I'm disinclined to bother them.

If there is a specific part of this (it being about a decade ago, such
that I no longer recall the specifics) that you failed to understand the
first several times, please feel free to explain it to yourself.
Not to me, please. No interest. Thanks!
--
Cheers, "If you see a snake, just kill it.
Rick Moen Don't appoint a committee on snakes."
***@linuxmafia.com -- H. Ross Perot
McQ! (4x80)
Bryan J Smith
2015-11-08 11:38:30 UTC
Permalink
Post by Rick Moen
My boss and co-workers of that time period would probably remember, but
I'm disinclined to bother them.
If there is a specific part of this (it being about a decade ago, such
that I no longer recall the specifics) that you failed to understand the
first several times, please feel free to explain it to yourself.
Not to me, please. No interest. Thanks!
So it's yet another bark about Drepper et al. without any info, just a
general complaint, followed by an insinuation, etc... Well played.
His legend lives on. ;)

I.e., I do _not_ care for complaints about my colleagues with_out_ any info.

I've been a party to many of those complaints, only to find out, when
I got to the bottom of them, that they were _un_justified. While I wish
every one of the core maintainers from Red Hat could be as politically
correct as, say, a D.J. Delorie, not everyone can. So after a while,
people like Drepper, LP, etc... do "clam up" and _ignore_ people, if
they just don't "bark back."

The fact that Red Hat made him available, in person, says a lot about
the lengths a company is willing to go to, and Drepper was open to that.
Obviously it sounded like you didn't make your case, so he didn't
budge. But no good deed goes unpunished. You will continue to
complain about it for years to come, with_out_ remembering any facts.

Meanwhile, other companies who had him come out and do the same didn't
just learn from the experience ... they hired him away from Red Hat.
Wall Street firms do that a lot, including in the case of Drepper. ;)
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Rick Moen
2015-11-08 15:46:46 UTC
Permalink
Post by Bryan J Smith
So it's yet another bark about Drepper et al. without any info, just a
general complaint, followed by an insinuation, etc... Well played.
His legend lives on. ;)
You can assume either that (1) I was telling the truth, or (2) I wasn't.
Either way, I really don't give a damn.
Bryan J Smith
2015-11-08 16:17:58 UTC
Permalink
Post by Rick Moen
You can assume either that (1) I was telling the truth, or (2) I wasn't.
Either way, I really don't give a damn.
I invite you to re-read what you said, like I just did ... in full.

You didn't detail crap, all while complaining about me assuming. It
didn't matter how many times I gave examples, you kept assuming it
wasn't related.

And in the end ... you, again, didn't detail crap.

So just WTF do you expect me to do, when you lambast one of my
colleagues? That's what always "gets me into trouble": people
lambasting my colleagues without details.

Sorry I asked for them. Excuse me for doing so.

-- bjs
Rick Moen
2015-11-08 16:26:00 UTC
Permalink
Post by Bryan J Smith
I invite you to re-read what you said, like I just did ... in full.
No, sorry, it was a waste of time the first time through.
Bryan J Smith
2015-11-08 16:30:44 UTC
Permalink
Post by Rick Moen
Post by Bryan J Smith
I invite you to re-read what you said, like I just did ... in full.
No, sorry, it was a waste of time the first time through.
In other words ...
- You not only want the last word,
- You have to complain about maintainers, and
- You have to win

I'm the continual a-hole because I call people out when they criticize
people I know and have worked with. They don't see what it does, and
they -- like you here -- often _fail_ to detail ... anything.

And the sad thing here is ... Red Hat made him available to you too.
No good deed goes unpunished I guess. Good job.

In 98-99% of cases, people just don't like the answer. And that's on
them. It's very, very hard for a "core maintainer" to be well liked.
And those 98-99% drown out the 1-2% of us that have legitimate
complaints.

All I asked was for you to detail your experiences. But I guess you just
want to complain, as well as insinuate. But that makes me the a-hole.
--
Bryan J Smith - http://www.linkedin.com/in/bjsmith
Rick Moen
2015-11-08 16:37:00 UTC
Permalink
Post by Bryan J Smith
In other words ...
Not interested. Bye!
--
Cheers, "If you see a snake, just kill it.
Rick Moen Don't appoint a committee on snakes."
***@linuxmafia.com -- H. Ross Perot
McQ! (4x80)
Steve Litt
2015-11-05 16:26:24 UTC
Permalink
Huh?

On Thu, 5 Nov 2015 11:16:58 -0500
Post by Shawn McMahon
They should start a community site for LEAP contributors and friends.
They could use a domain for it such as "leap-cf.org".
Shawn McMahon
2015-11-05 16:29:13 UTC
Permalink
The product is called LEAP. The domain "leap-cf.org" is up for sale, only
99 Euros. A bargain, and makes perfect sense for their product's community
efforts.
Post by Steve Litt
Huh?
On Thu, 5 Nov 2015 11:16:58 -0500
Post by Shawn McMahon
They should start a community site for LEAP contributors and friends.
They could use a domain for it such as "leap-cf.org".