It’s similar to Google Keep, Evernote or Obsidian but feels a lot easier to use and, of course, I don’t need to worry about a company discontinuing it or suddenly making it a lot more expensive.
Source: Trilium by Stephen Downes
GitHub – zadam/trilium: Build your personal knowledge base with Trilium Notes
I, like Luhmann, see writing as integral to complex thinking. Therefore, my goal in keeping a zettelkasten is to link ideas and turn these links into essays, blog posts, books, and content for my newsletter. Writing is, to use Karl Weick’s term, an act of sensemaking. A generic note-storing system—were it sentient enough to have feelings—would not care if you made sense of the way your notes interacted or if you turned that sense into a book. But a zettelkasten—with equal sentience—does. A zettelkasten “cares” that you connect your ideas and find ways to express them, because doing so brings life to you, your creative work, and the zettelkasten itself.
Yes, the physiology and biomechanics of treadmill running are a little different. But how you feel about it is probably more important.
So if you want to run on the treadmill, be my guest. It’s not my cup of tea, but there are plenty of great runners who’ve sworn by it. My favorite example is probably Christine Clark, the treadmill-trained Alaskan who emerged from the boreal winter to win the 2000 Olympic Marathon Trials in South Carolina. If you do, don’t worry too much about whether a 7:00 treadmill mile equates to a 6:55 or a 7:05 outside. Just run, get suitably tired, and repeat. The pace, when you get back outside, will take care of itself.
In some ways, this reminds me of the debate around physical note taking versus typing on the computer.
The answer is not intuitively obvious, as there are both advantages and disadvantages to each method. Humans on average type faster than we write with a pen, which allows for the recording of more information, even verbatim transcripts. But computers are host to a number of distractions in class, including alerts, YouTube, and social media. A Word document is also a poor medium for information that goes beyond sentences, like charts and graphics. With regard to the pen, its relative slowness means the student often has to decide on the spot what’s important and what’s not, and how best to rephrase the information and organize it. This sounds like a boon for comprehension… but it requires more brain energy, which may mean important details spoken by the professor end up ignored while the brain wrestles with the information.
This collection deals with NAPLAN in Australia, but our introductory and concluding chapters seek to situate the research reported here in a broader global context, aware of the circulation today of globalised education policy discourses and the significance of international testing as a complement to national testing such as NAPLAN.
Unlike other national testing regimes such as the National Assessment of Educational Progress (NAEP) in the US or the Pan-Canadian Assessment Program (PCAP), NAPLAN is a census test, not a sample test.
NAPLAN data are thus used for a variety of purposes, including governing school systems, accountability purposes, managing staff within systems and schools, and making educational decisions regarding curriculum and pedagogy in systems, schools and classrooms.
Together, NAPLAN, MySchool and the raft of programs and contractual arrangements between governments and schools that reference testing data illustrate the pervasiveness of technocratic rationality in Australian schooling
NAPLAN was established to improve teaching and learning outcomes, but one significant effect has been that much teaching is now aimed at improving NAPLAN scores.
NAPLAN data were useful in providing a common language for communication between principals, teachers and parents about student progress and achievement.
In summary, we would say that a NAPLAN test only provides an indicative level of the performance of a student: whether the student is struggling, on track, or performing above average. The NAPLAN tests do not provide fine grading of students by their performance levels because of the large uncertainties associated with the ability measures.
If teachers do not change the way they teach, the school mean scores for a year level can vary within a range of 32 NAPLAN points for 90% of the time if we have the opportunity to repeatedly allocate random samples of potential students to this school. Compare this margin of error with the expected annual growth rates of 44 points at Year 3, 28 points at Year 5, and 21 points at Year 7; the fluctuation in school mean scores due to a particular cohort of students has a magnitude close to one year of growth. This means that for many schools with a year level size of 50 or fewer, the average school performance could change significantly from one calendar year to another.
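To make the scale of that sampling noise concrete, here is a quick simulation of my own (an illustration, not from the source). It assumes, hypothetically, that individual scores within a year level spread with a standard deviation of about 70 NAPLAN points around the population mean; with cohorts of 50 students that assumption roughly reproduces the quoted 32-point range for 90% of school means.

```python
import random
import statistics

# Hypothetical figures, chosen only to illustrate the quoted claim:
# a within-year-level standard deviation of ~70 NAPLAN points (my
# assumption, not a figure from the source) and cohorts of 50 students.
STUDENT_SD = 70
COHORT_SIZE = 50
POPULATION_MEAN = 500

def simulate_cohort_means(n_cohorts=10_000, seed=1):
    """Draw many random cohorts of students; record each cohort's mean score."""
    rng = random.Random(seed)
    return [
        statistics.mean(
            rng.gauss(POPULATION_MEAN, STUDENT_SD) for _ in range(COHORT_SIZE)
        )
        for _ in range(n_cohorts)
    ]

means = simulate_cohort_means()
cuts = statistics.quantiles(means, n=20)   # 5th, 10th, ..., 95th percentiles
lo, hi = cuts[0], cuts[-1]                 # bracket the middle 90% of cohort means
print(f"90% of cohort means fall within a range of about {hi - lo:.0f} points")
```

With no change in teaching at all, the simulated school mean still swings across roughly 30-odd points purely because a different random group of students sat the test, which is the authors’ point about cohort effects.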
We need to always remember that using student assessment data to evaluate teachers is making an inference, since we have not directly measured teacher performance. The validity of making this inference needs to be checked in every case.
One should never jump to conclusions of ineffective schools whenever NAPLAN results are low. NAPLAN results indicate where further investigations are warranted.
As teacher effect accounts for only a small portion of the student achievement variance, individual teacher effect is likely to be swamped by the large variations in student abilities in a class. This is a reliability issue.
In conclusion, national testing data can inform us about performances of large groups of students, but not tell us a great deal about individual students or schools. National testing data cannot provide teacher performance measures, so there should not be any link between student test results and teacher appraisal or pay. National testing data have the potential to inform teaching and learning, and to frame education policies. However, we need to ensure that evidence-based decision making is backed by sound data and valid inferences.
Focusing on NAPLAN and MySchool as interesting objects – as actors in their own right, rather than as effects or products of neoliberal governance strategies – provides the opportunity to explore the technologies and mechanisms through which such objects serve to delegate trust, create new intimacies and reorganise relations.
By providing access to much more detail about each school, it brought parents closer to knowing their child’s school. It also revealed to schools themselves information that they previously did not have about themselves and about other schools.
Here I take NAPLAN and MySchool to be calculative objects – objects that resulted from policy decisions, to be sure, but which also became participants in the policy arena, actively rearranging the goals of schools, parents, teachers and policy makers and bringing to the forefront new issues and problems. I present four specific features or functions of interesting objects: creating new intimates, translating interests, displacing trust and creating informed publics.
Not only did MySchool become a technology through which the government entered intimate spaces of schools, schools themselves entered intimate spaces of living rooms and kitchens through discussions between parents
By involving parents in the job of keeping schools accountable and in continually improving their performance, parents and the government were cast as intimates – partners in the shared enterprise of school improvement.
By inserting itself between parents and their child’s school, MySchool attempted to enrol parents as canny stakeholders, casting the schools as secretive actors who were reluctantly being forced to reveal information they would rather have kept to themselves
NAPLAN and MySchool thus changed the original goals, motivations and plans of various actors
NAPLAN and MySchool thus created relations of distrust and suspicion between schools and the government, as well as schools and the public. They displaced trust from local actors with immediate knowledge and delegated trust instead to distant and impersonal actors.
NAPLAN and MySchool produce an abstract, impoverished and interested version of the very complex phenomenon of schooling in Australia. However, these interested observations of NAPLAN and MySchool are not merely providing useful, detailed accounts of Australian schooling; rather, they are actually changing the very nature of Australian schooling, so that it is beginning to more closely resemble the abstract version presented on the MySchool website. Rather than NAPLAN and MySchool reflecting an abstract version of Australian schooling, they are perhaps remaking Australian schooling in their image.
As Strathern (1997) states: ‘When a measure becomes a target it ceases to be a good measure’ (308).
In the context of NAPLAN, while the tests may measure attainment in numeracy or literacy, it is questionable whether the information from these tests can be used validly for explaining how well the school has performed. Yet the aggregation of test scores across students to provide composite measures of educational effectiveness for teachers, schools, states or even the nation is commonly used in education for accountability purposes.
What policymakers intend is always mediated by how policy ‘hits the ground’, or is enacted, by individuals in diverse, complex community and institutional settings.
It must be stressed that NAPLAN is designed to change practice and behaviour through the emphasis on test-based accountabilities. However, not all change is desirable
The most dangerous possibility of testing data is that it distorts and corrupts the very processes it intends to measure. As education policy makers seem intent on continuing to use test data to steer practice from a distance, it remains to be seen how this distortion can be prevented.
In the case of schools, the use of NAPLAN results as a blunt accountability instrument through their publication on the MySchool website has significantly increased the pressure on schools to treat NAPLAN results as more than just a snapshot of student achievement at a particular point in time
First, rather than NAPLAN itself being the central issue of concern in this instance, it is the use of NAPLAN results in largely inappropriate ways that is likely to be generating serious negative consequences
Second, these types of findings, and the likely reasons behind them, suggest a serious lack of knowledge amongst some policy makers, bureaucrats, principals, teachers and parents about the limitations of NAPLAN results (and indeed, any single test score)
Overall, it seems evident that the NAPLAN program is generating stress-related responses amongst substantial numbers of students across Australia. While there is a need for further research to elucidate the reasons behind this, it is highly likely that the use of NAPLAN results in inappropriate ways is contributing to student stress through the messages sent to students in the words and actions of principals, teachers and parents. Blaming these groups is not the way forward – rather, the time has come to discuss the relevance of NAPLAN, whether the benefits are worth the substantial costs (including psychological), and if NAPLAN is to continue, what the appropriate, statistically defensible and reasonable use of student test results might look like.
The common agreement for literacy is a school-based policy, collaboratively developed between teachers and leaders, that prescribes what should be included in the daily uninterrupted literacy block. The block includes: guided reading, Jolly Phonics (Reception – Year 2), explicit teaching of comprehension strategies, daily reading practice (Choosing to Read), shared reading, handwriting, writing, spelling program, grammar and punctuation, as well as the locally mandated assessments to be undertaken over the year and the SMARTA (Specific/Student focused; Measurable; Achievable; Relevant; Time-lined; Agreed) targets for reading endorsed by the region. All teachers are given copies of the literacy agreement in their induction folders at the beginning of the school year and they were posted prominently on the notice board in the staff room.

A locally generated text, the literacy agreement has come into existence as a result of very low NAPLAN results. It not only reflects the programs that teachers considered to be valuable, but the shaping force of NAPLAN. In this way, NAPLAN regulates the school’s common literacy agreement, constitutes the literacy problem and coordinates everyday classroom work in more or less obvious ways. For instance, the literacy component of NAPLAN includes a reading comprehension test, a writing test (genre writing), a spelling test and a grammar and punctuation test
As we have seen, Sandford has engaged with the unavoidable accountability requirements associated with NAPLAN. We have shown the extent to which NAPLAN has evoked a narrow view of literacy as the practice of content-free skills and how this view is reproduced in the active and occurring text of the literacy agreement that shapes what happens in classrooms. Nevertheless, NAPLAN does not always dominate what can be said. The potential sedimentation of NAPLAN is unraveled and reworked, at least to some degree, in the literacy chats, a product of the school’s recognition of the teachers’ needs for professional mentoring conversations that take account of actual students and their learning trajectories. In these educative and dialogical spaces, the senior leader works with teachers to design pedagogical interventions for students whose progress in school literacy learning is cause for concern. However, it is not only a question of looking at data as an artefact of the student, as the excerpt of Carrie’s literacy chat indicates. In mediating translocal policies that might otherwise close down possibilities for engaging ethically with students, the senior leader offers teachers the possibility of creative and critical literacy pedagogies. Despite their value in turning teachers around to students’ knowledge and practices as resources for school literacy learning, such pedagogies are less and less visible in schools since the advent of NAPLAN.
… a warm-up session to ensure students were ready to learn; an ‘I do’ session in which the teacher demonstrated the specific task which was the focus of the lesson; a ‘we do’ session in which teachers worked with students as a whole class to co-construct a model response; a ‘you do’ activity involving students working independently; and a ‘ploughing back’ session in which students revised the lesson objectives and outcomes
Strong average performance in numeracy by some LBOTE students is not simply ascribed to a cultural fixation on academic attainment but may be a reflection of numeracy skills attained through comprehensive educational backgrounds; this strong average performance clouds the heterogeneity of the LBOTE category; LBOTE classification encompasses a broad, heterogeneous group of students, which, in the absence of a measure of English language proficiency, is most evident when NAPLAN results are disaggregated according to visa status of LBOTE students. Visa status, in turn, is informative about disadvantage related to prior educational opportunities, because students of refugee background are performing far below those of other migration categories, particularly the skilled visa category; language proficiency levels and years of schooling are associated with NAPLAN outcomes; and students who are of refugee background, with reduced years of schooling, and in the early stages of acquiring English are most disadvantaged in NAPLAN test results, but are completely hidden in the LBOTE category.
NAPLAN data need to be interpreted and understood within the context of language learning, whereas, in its current form, the breadth of LBOTE can only render a shallow interpretation, which dangerously ignores understandings about academic second language development.
Evident in the above is how, over the years of NAPLAN administration, support for students with different needs – social and emotional, language background, learning difficulties – to participate in NAPLAN has narrowed to serve the priority of administrative consistency.
NAPLAN data were reported to have little utility compared to information already obtained: ‘[NAPLAN] does not provide us with any information about students that we don’t already know ourselves. We profile our students. And it just gives us another piece of information that we would otherwise have anyway.’ (Principal, independent PY–12 school)
…teachers reported positive value from NAPLAN as confirming their own professional judgements
Following Simons (2014), international and national tests can be seen to function as global/national positioning devices, evidence of a new spatial disposition and, in Australia, evidence of the emergence of a numbers-based national system of schooling. While these developments provide some evidence of a world polity approach that talks about the global diffusion of modernity and also the global dissemination of a particular version of science and social science, they also reflect the global impacts of an Anglo-American model of school reform based very much on test-based, top-down modes of educational accountability.
There is a common perception that testing data are inert, lifeless objects that provide an unbiased and objective measure of educational process, practices and outcomes.
However, we must be careful in making this claim that there is a life of data. In its most extreme form, this can lead to positing data as an agentive actor that makes decisions and behaves in certain ways. This is clearly not the case – data are expressions of human subjectivity, an expression of the values, sensibilities, processes that lead to their creation, and then the paths that the data lay down for individuals in terms of their choices, actions and acts of enunciation. Data are thus part of new spaces of subjectivity that are not contained within human bodies, but instead extend into information systems such as testing regimes, but also other data-driven applications such as social media or mobile phone usage. To understand the life of data, then, is to recognise that data produce possibilities and are invoked through the behaviours and values that result from the production of data. We cannot see data as external to the production of subjectivity, rather as Guattari (1992) argues, there is a little piece of human subjectivity in each data point: the technologies that we use to engage with data ‘are nothing more than hyperdeveloped and hyperconcentrated forms of certain aspects of human subjectivity’ (18).
Data have a life; they are always and everywhere put to work, always and everywhere in motion. One demonstration of this principle was highlighted by Nichols and Berliner (2007). Their argument was that the higher the stakes attached to any single measure that is used to make important decisions about students, teachers and schools, the more liable it is that the initial measure becomes corrupted, because the processes are distorted by the emphasis. This is called ‘Campbell’s Law’, which stipulates: …the more any quantitative social indicator is used in social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it was intended to measure. (Nichols & Berliner 2007: 27) For example, tests like NAPLAN, which are designed to measure student achievement in the constructs of basic literacy and numeracy skills, become corrupted when teachers devote excessive class time to preparing for the tests. In other words, the tests no longer measure constructs regarding literacy and numeracy; rather, they begin to measure the construct of how well a teacher can prepare a class. Obviously this is a problem: if important decisions are being made about literacy and numeracy on data that do not measure what they purport to measure, such decisions may not drive the improvements that were intended.
If data have lives, they are enacted through the space and time of data, and notions like consequential validity advanced by test developers themselves speak to this life
The critical question then is ‘what ought to be the future orientation to data at all levels of schooling’? This is primarily a political question and it needs to trouble the thinking and work of politicians, policy makers, system leaders, principals, teachers, students, the broader community and also educational researchers.
Given this, we are not opposed to national testing, but we do believe that our assessments of national testing clearly point to areas where action must be taken to reduce its negative effects in Australia and elsewhere.
Fraternize means to behave like a brother. Luke told me that. He said there was no corresponding word that meant to behave like a sister. Sororize, it would have to be, he said.
The young ones are often the most dangerous, the most fanatical, the jumpiest with their guns. They haven’t yet learned about existence through time. You have to go slowly with them.
There is more than one kind of freedom, said Aunt Lydia. Freedom to and freedom from. In the days of anarchy, it was freedom to. Now you are being given freedom from. Don’t underrate it.
I would like to believe this is a story I’m telling. I need to believe it. I must believe it. Those who can believe that such stories are only stories have a better chance. If it’s a story I’m telling, then I have control over the ending. Then there will be an ending, to the story, and real life will come after it. I can pick up where I left off. It isn’t a story I’m telling. It’s also a story I’m telling, in my head, as I go along. Tell, rather than write, because I have nothing to write with and writing is in any case forbidden. But if it’s a story, even in my head, I must be telling it to someone. You don’t tell a story only to yourself. There’s always someone else. Even when there is no one.
I read about that in Introduction to Psychology; that, and the chapter on caged rats who’d give themselves electric shocks for something to do. And the one on the pigeons, trained to peck a button which made a grain of corn appear. Three groups of them: the first got one grain per peck, the second one grain every other peck, the third was random. When the man in charge cut off the grain, the first group gave up quite soon, the second group a little later. The third group never gave up. They’d peck themselves to death, rather than quit. Who knew what worked?
Detailing the rise of automation over time, Rise of the Robots by Martin Ford outlines a number of possible futures and the choices that we have.
This is part of the acceleration of technology and machine learning, which has led to improvements in productivity, but not in pay. Average workers are not benefiting. The implication is that there is less to spend on anything beyond the essentials, which has a negative impact on the global economy.
The reality of this situation is that the future is not set. A number of outcomes are on the cards. For example, if fast food and retail are automated, this could lead to mass unemployment, with the rich investing in gold rather than goods. Another possibility is a market-based solution, where we embrace something like a negative income tax. Whatever the answer is, Ford argues that we need to adapt.
Here then is a collection of my quotes:
As of 2013, a typical production or non-supervisory worker earned about 13 percent less than in 1973 (after adjusting for inflation), even as productivity rose by 107 percent and the costs of housing, education, and healthcare have soared.
Machines themselves are turning into workers, and the line between the capability of labor and capital is blurring as never before.
The top 5 percent of households are currently responsible for nearly 40 percent of spending, and that trend toward increased concentration at the top seems almost certain to continue. Jobs remain the primary mechanism by which purchasing power gets into the hands of consumers. If that mechanism continues to erode, we will face the prospect of having too few viable consumers to continue driving economic growth in our mass-market economic system.
According to the International Federation of Robotics, global shipments of industrial robots increased by more than 60 percent between 2000 and 2012, with total sales of about $28 billion in 2012. By far the fastest-growing market is China, where robot installations grew at about 25 percent per year between 2005 and 2012.
Vending machines make it possible to dramatically reduce three of the most significant costs incurred in the retail business: real estate, labor, and theft by customers and employees.
the problem is not that more jobs are being destroyed in downturns; it is that fewer are being created during recoveries.
From the perspective of any one individual, inequality can be very difficult to perceive. Most people tend to focus their attention locally. They worry about how they are doing relative to the guy next door as opposed to the hedge fund manager they will, in all likelihood, never encounter.
Among the forces poised to shape the future, information technology stands alone in terms of its exponential progress. Even in nations whose political environments are far more responsive to the welfare of average workers, the changes wrought by technology are becoming increasingly evident. As the technological frontier advances, many jobs that we would today consider nonroutine, and therefore protected from automation, will eventually be pulled into the routine and predictable category. The hollowed-out middle of the already polarized job market is likely to expand as robots and self-service technologies eat away at low-wage jobs, while increasingly intelligent algorithms threaten higher-skill occupations.
Moore’s Law is the best-known measure of advancing computer power, but information technology is, in fact, accelerating on many different fronts. For example, computer memory capacity and the amount of digital information that can be carried on fiber-optic lines have both experienced consistent exponential increases. Nor is the acceleration confined to computer hardware; the efficiency of some software algorithms has soared at a rate far in excess of what Moore’s Law alone would predict. While exponential acceleration offers valuable insight into the advance of information technology over relatively long periods, the short-term reality is more complex. Progress is not always smooth and consistent; instead, it often lurches forward and then pauses while new capabilities are assimilated into organizations and the foundation for the next period of rapid advance is established. There are also intricate interdependencies and feedback loops between different realms of technology. Progress in one area may drive a sudden burst of innovation in another. As information technology marches forward, its tentacles reach ever deeper into organizations and the overall economy, often transforming the way people work in ways that can further its own advance.
Even if the advance of computer hardware capability were to plateau, there would remain a whole range of paths along which progress could continue. Information technology exists at the intersection of two different realities. Moore’s Law has dominated the realm of atoms, where innovation is a struggle to build faster devices and to minimize or find a way to dissipate the heat they generate. In contrast, the realm of bits is an abstract, frictionless place where algorithms, architecture (the conceptual design of computing systems), and applied mathematics govern the rate of progress. In some areas, algorithms have already advanced at a far faster rate than hardware.
IT has evolved into a true general-purpose technology. There are very few aspects of our daily lives, and especially of the operation of businesses and organizations of all sizes, that are not significantly influenced by or even highly dependent on information technology. Computers, networks, and the Internet are now irretrievably integrated into our economic, social, and financial systems.
Information technology, to a degree that is unprecedented in the history of technological progress, encapsulates intelligence. Computers make decisions and solve problems. Computers are machines that can—in a very limited and specialized sense—think. No one would argue that today’s computers approach anything like human-level general intelligence. But that very often misses the point. Computers are getting dramatically better at performing specialized, routine, and predictable tasks, and it seems very likely that they will soon be poised to outperform many of the people now employed to do these things.
Today’s computer technology exists in some measure because millions of American middle-class taxpayers supported federal funding for basic research in the decades following World War II. We can be reasonably certain that those taxpayers offered their support in the expectation that the fruits of that research would create a more prosperous future for their children and grandchildren. Yet, the trends we looked at in the last chapter suggest we are headed toward a very different outcome.
The Quill narrative-writing engine is just one of many new software applications being developed to leverage the enormous amounts of data now being collected and stored within businesses, organizations, and governments across the global economy. By one estimate, the total amount of data stored globally is now measured in thousands of exabytes (an exabyte is equal to a billion gigabytes), and that figure is subject to its own Moore’s Law-like acceleration, doubling roughly every three years. Nearly all of that data is now stored in digital format and is therefore accessible to direct manipulation by computers. Google’s servers alone handle about 24 petabytes (equal to a million gigabytes)—primarily information about what its millions of users are searching for—each and every day.
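A quick sanity check on that doubling rate (my own arithmetic, not Ford’s): a quantity that doubles every three years grows by a factor of 2^(t/3) after t years, so a decade of growth multiplies the total roughly tenfold and fifteen years multiplies it thirty-two-fold. The time horizons below are my own illustrative choices.

```python
# Compound growth implied by "doubling roughly every three years".
# The doubling period comes from the quoted passage; the horizons
# printed below are illustrative choices of my own.
DOUBLING_YEARS = 3

def growth_factor(years: float) -> float:
    """Total growth multiple after the given number of years."""
    return 2 ** (years / DOUBLING_YEARS)

for horizon in (3, 6, 10, 15):
    print(f"after {horizon:2d} years: x{growth_factor(horizon):.1f}")
```

The point of the exercise is how quickly a fixed doubling period compounds: at this rate, total stored data at the start of a decade is a rounding error by its end.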
All this data arrives from a multitude of different sources. On the Internet alone, there are website visits, search queries, emails, social media interactions, and advertising clicks, to name just a few examples. Within businesses, there are transactions, customer contacts, internal communications, and data captured in financial, accounting, and marketing systems.
The evaporation of thousands of skilled information technology jobs is likely a precursor for a much more wide-ranging impact on knowledge-based employment.
I would argue that “free trade” is the wrong lens through which to view offshoring. Instead, it is much more akin to virtual immigration.
When offshoring is viewed in combination with automation, the potential aggregate impact on employment is staggering.
Innovations like MOOCs, automated marking algorithms, and adaptive learning systems offer a relatively promising path toward eventual disruption.
If the individual-ownership model for cars ultimately falls, the impact on broad swathes of the economy and job market would be extraordinary. Think of all the car dealers, independent repair shops, and petrol stations within a few miles of your home. Their existence is all tied directly to the fact that car ownership is widely distributed. In the world that Google envisions, robotic cars will be concentrated into fleets. Maintenance, repair, insurance, and fueling would likewise be centralized. Untold thousands of small businesses, and the jobs associated with them, would evaporate. To get a sense of just how many jobs might be at risk, consider that, in the UK alone, about 200,000 people work in car washes.
the first place where self-driving cars make serious inroads might be exactly the area that directly impacts the most jobs.
Workers are also consumers, and they rely on their wages to purchase the products and services produced by the economy.
Markets are driven not just by aggregate dollars but also by unit demand.
Eventually, technology will advance to the point where low wages no longer outweigh the benefits of further automation.
The complexity that operates in the real-world economy is, in many ways, somewhat analogous to that of the climate system, which is likewise characterized by a nearly impenetrable web of interdependencies and feedback effects.
The most important things—food, housing, energy, healthcare, transportation, insurance—are much less likely to see rapid, near-term cost reductions. There’s a real danger that households will end up being squeezed between stagnant or falling incomes and major-expense items that continue to rise in cost.
one is nonetheless left to wonder just how long we have to wait before the promised labor shortages begin to put a dent in unemployment among younger workers.
Consumer markets play a critical role not just in supporting current economic activity but also in advancing the overall process of innovation. While individuals or teams generate new ideas, it is ultimately consumer markets that create the incentive for innovation.
In most areas, no amount of education or training—even from the most elite universities—would make a human being competitive with such machines. Even occupations that we might expect to be reserved exclusively for people would be at risk.
It’s becoming increasingly clear, however, that robots, machine learning algorithms, and other forms of automation are gradually going to consume much of the base of the job skills pyramid. And because artificial intelligence applications are poised to increasingly encroach on more skilled occupations, even the safe area at the top of the pyramid is likely to contract over time.